Test Report: Docker_Linux_crio_arm64 22094

4d318e45b0dac190a241a23c5ddc63ef7c67bab3:2025-12-10:42711

Failed tests (41/316)

Order  Failed test  Duration (s)
29 TestDownloadOnlyKic 1.01
38 TestAddons/serial/Volcano 0.36
44 TestAddons/parallel/Registry 14.2
45 TestAddons/parallel/RegistryCreds 0.51
46 TestAddons/parallel/Ingress 146.16
47 TestAddons/parallel/InspektorGadget 6.28
48 TestAddons/parallel/MetricsServer 5.38
50 TestAddons/parallel/CSI 38.18
51 TestAddons/parallel/Headlamp 3.29
52 TestAddons/parallel/CloudSpanner 5.28
53 TestAddons/parallel/LocalPath 10.47
54 TestAddons/parallel/NvidiaDevicePlugin 6.36
55 TestAddons/parallel/Yakd 5.29
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 512.12
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 369.83
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 2.41
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 2.52
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 2.53
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 737.28
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 2.27
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 0.07
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 1.78
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 3.21
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 2.44
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 241.69
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 1.44
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 0.12
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 106.52
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.27
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.29
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.29
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.26
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 2.41
293 TestJSONOutput/pause/Command 2.4
299 TestJSONOutput/unpause/Command 1.77
358 TestKubernetesUpgrade 806.5
384 TestPause/serial/Pause 6.53
478 TestNetworkPlugins/group/enable-default-cni/NetCatPod 7200.084
TestDownloadOnlyKic (1.01s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-800978 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:239: expected tarball file "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4" to exist, but got error: stat /home/jenkins/minikube-integration/22094-362392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4: no such file or directory
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-800978" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-800978
--- FAIL: TestDownloadOnlyKic (1.01s)
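The assertion that fails here reduces to a plain existence check on the preload tarball path quoted above. A minimal standalone sketch of the same check (hypothetical snippet, not the test's actual code; the path is copied verbatim from the failure message):

```go
// Existence check equivalent to the failing assertion in
// aaa_download_only_test.go:239 (illustrative sketch only).
package main

import (
	"fmt"
	"os"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		// Mirrors the report's failure message.
		fmt.Printf("expected tarball file %q to exist, but got error: %v\n", tarball, err)
		os.Exit(1)
	}
	fmt.Println("preload tarball present")
}
```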

TestAddons/serial/Volcano (0.36s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable volcano --alsologtostderr -v=1: exit status 11 (362.906598ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 06:11:57.992781  372136 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:11:57.994297  372136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:11:57.994330  372136 out.go:374] Setting ErrFile to fd 2...
	I1210 06:11:57.994338  372136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:11:57.994722  372136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:11:57.995089  372136 mustload.go:66] Loading cluster: addons-241520
	I1210 06:11:57.995604  372136 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:11:57.995629  372136 addons.go:622] checking whether the cluster is paused
	I1210 06:11:57.995803  372136 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:11:57.995821  372136 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:11:57.996517  372136 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:11:58.038959  372136 ssh_runner.go:195] Run: systemctl --version
	I1210 06:11:58.039026  372136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:11:58.070390  372136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:11:58.188034  372136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:11:58.188176  372136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:11:58.235790  372136 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:11:58.235815  372136 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:11:58.235821  372136 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:11:58.235825  372136 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:11:58.235828  372136 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:11:58.235832  372136 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:11:58.235840  372136 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:11:58.235844  372136 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:11:58.235847  372136 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:11:58.235853  372136 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:11:58.235857  372136 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:11:58.235860  372136 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:11:58.235863  372136 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:11:58.235867  372136 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:11:58.235870  372136 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:11:58.235875  372136 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:11:58.235882  372136 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:11:58.235886  372136 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:11:58.235890  372136 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:11:58.235893  372136 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:11:58.235897  372136 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:11:58.235902  372136 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:11:58.235906  372136 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:11:58.235909  372136 cri.go:89] found id: ""
	I1210 06:11:58.235961  372136 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:11:58.252215  372136 out.go:203] 
	W1210 06:11:58.255293  372136 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:11:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:11:58.255322  372136 out.go:285] * 
	W1210 06:11:58.264204  372136 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:11:58.267286  372136 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.36s)
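Every MK_ADDON_DISABLE_PAUSED failure in this report follows the pattern visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by shelling out to `sudo runc list -f json`, and on this crio node the default runc state directory /run/runc does not exist, so runc exits 1 and the whole disable aborts. A minimal sketch of a paused-container check that tolerates the missing state directory (hypothetical illustration, not minikube's actual code; `--root` is runc's state-directory flag):

```go
// Paused-container check that treats a missing runc state dir as
// "nothing is paused" instead of a hard error (illustrative sketch).
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// runcContainer holds the fields we need from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listPaused(stateDir string) ([]string, error) {
	if _, err := os.Stat(stateDir); os.IsNotExist(err) {
		return nil, nil // no state dir => no containers, so none are paused
	}
	out, err := exec.Command("sudo", "runc", "--root", stateDir, "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	// /run/runc is the directory the errors above report as missing.
	ids, err := listPaused("/run/runc")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("paused containers:", ids)
}
```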

TestAddons/parallel/Registry (14.2s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 12.924122ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003222884s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003956677s
addons_test.go:394: (dbg) Run:  kubectl --context addons-241520 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-241520 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-241520 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.587791588s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 ip
2025/12/10 06:12:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable registry --alsologtostderr -v=1: exit status 11 (292.994982ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 06:12:23.584035  373055 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:12:23.584806  373055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:23.584848  373055 out.go:374] Setting ErrFile to fd 2...
	I1210 06:12:23.584870  373055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:23.585225  373055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:12:23.585574  373055 mustload.go:66] Loading cluster: addons-241520
	I1210 06:12:23.586015  373055 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:23.586062  373055 addons.go:622] checking whether the cluster is paused
	I1210 06:12:23.586207  373055 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:23.586245  373055 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:12:23.586801  373055 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:12:23.604914  373055 ssh_runner.go:195] Run: systemctl --version
	I1210 06:12:23.604980  373055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:12:23.635744  373055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:12:23.752397  373055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:12:23.752496  373055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:12:23.784207  373055 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:12:23.784232  373055 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:12:23.784244  373055 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:12:23.784249  373055 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:12:23.784252  373055 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:12:23.784258  373055 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:12:23.784261  373055 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:12:23.784265  373055 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:12:23.784269  373055 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:12:23.784275  373055 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:12:23.784279  373055 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:12:23.784283  373055 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:12:23.784287  373055 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:12:23.784291  373055 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:12:23.784298  373055 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:12:23.784303  373055 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:12:23.784307  373055 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:12:23.784311  373055 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:12:23.784314  373055 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:12:23.784317  373055 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:12:23.784321  373055 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:12:23.784333  373055 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:12:23.784337  373055 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:12:23.784339  373055 cri.go:89] found id: ""
	I1210 06:12:23.784391  373055 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:12:23.800507  373055 out.go:203] 
	W1210 06:12:23.803416  373055 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:12:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:12:23.803447  373055 out.go:285] * 
	W1210 06:12:23.808631  373055 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:12:23.811498  373055 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.20s)

TestAddons/parallel/RegistryCreds (0.51s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.742016ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-241520
addons_test.go:334: (dbg) Run:  kubectl --context addons-241520 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (264.353986ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 06:13:19.034885  374586 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:13:19.036010  374586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:13:19.036052  374586 out.go:374] Setting ErrFile to fd 2...
	I1210 06:13:19.036073  374586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:13:19.036376  374586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:13:19.036747  374586 mustload.go:66] Loading cluster: addons-241520
	I1210 06:13:19.037317  374586 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:13:19.037362  374586 addons.go:622] checking whether the cluster is paused
	I1210 06:13:19.037513  374586 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:13:19.037545  374586 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:13:19.038161  374586 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:13:19.058776  374586 ssh_runner.go:195] Run: systemctl --version
	I1210 06:13:19.059496  374586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:13:19.078346  374586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:13:19.183826  374586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:13:19.183911  374586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:13:19.217264  374586 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:13:19.217289  374586 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:13:19.217294  374586 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:13:19.217298  374586 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:13:19.217302  374586 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:13:19.217311  374586 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:13:19.217315  374586 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:13:19.217318  374586 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:13:19.217321  374586 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:13:19.217333  374586 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:13:19.217340  374586 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:13:19.217344  374586 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:13:19.217348  374586 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:13:19.217351  374586 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:13:19.217355  374586 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:13:19.217363  374586 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:13:19.217371  374586 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:13:19.217376  374586 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:13:19.217384  374586 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:13:19.217388  374586 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:13:19.217392  374586 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:13:19.217401  374586 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:13:19.217405  374586 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:13:19.217407  374586 cri.go:89] found id: ""
	I1210 06:13:19.217457  374586 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:13:19.232951  374586 out.go:203] 
	W1210 06:13:19.235760  374586 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:13:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:13:19.235786  374586 out.go:285] * 
	W1210 06:13:19.240826  374586 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:13:19.243739  374586 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.51s)

TestAddons/parallel/Ingress (146.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-241520 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-241520 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-241520 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [5840244e-718c-4c7e-9747-3f7240f6f886] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [5840244e-718c-4c7e-9747-3f7240f6f886] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00431924s
I1210 06:12:45.874523  364265 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.775847278s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-241520 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
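Exit status 28 is curl's operation-timeout code, so the probe above hung for the full 2m10s without receiving any HTTP response from the ingress controller. Off the node, the same request can be expressed as a plain GET with an overridden Host header; a minimal sketch (illustration only, not the test's code; the node IP comes from the logs above and the 30s timeout is an arbitrary choice):

```go
// Ingress probe equivalent to
//   curl -s http://<node-ip>/ -H 'Host: nginx.example.com'
// (illustrative sketch).
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Request.Host overrides the Host header, which is what the ingress
	// rule matches on; no DNS entry for nginx.example.com is needed.
	req.Host = "nginx.example.com"

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		// A timeout here is the analogue of curl's exit 28.
		fmt.Fprintln(os.Stderr, "probe failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```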
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-241520
helpers_test.go:244: (dbg) docker inspect addons-241520:

-- stdout --
	[
	    {
	        "Id": "7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9",
	        "Created": "2025-12-10T06:09:53.53362706Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 365685,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:09:53.601853467Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9/hosts",
	        "LogPath": "/var/lib/docker/containers/7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9/7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9-json.log",
	        "Name": "/addons-241520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-241520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-241520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9",
	                "LowerDir": "/var/lib/docker/overlay2/967d8bfa3f0b3c27b0b58d3acbfb1200cca1e98ed4b61902757132649cc8a30f-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/967d8bfa3f0b3c27b0b58d3acbfb1200cca1e98ed4b61902757132649cc8a30f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/967d8bfa3f0b3c27b0b58d3acbfb1200cca1e98ed4b61902757132649cc8a30f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/967d8bfa3f0b3c27b0b58d3acbfb1200cca1e98ed4b61902757132649cc8a30f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-241520",
	                "Source": "/var/lib/docker/volumes/addons-241520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-241520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-241520",
	                "name.minikube.sigs.k8s.io": "addons-241520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1973f5240a275d3bf2704705407b46fa337b7c75daf0b14a721ed8ffbaa5367a",
	            "SandboxKey": "/var/run/docker/netns/1973f5240a27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-241520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:9a:2d:49:a0:26",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "845a265d0e7b5e3a9437720e96236d256b61ca93174566fc563d2fd856a8dc10",
	                    "EndpointID": "76a5c1d9b65a6e58e5f2a25ed27a92da9d44f890eaa210c669fdc5cd280fb488",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-241520",
	                        "7dbf6b06e352"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
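Throughout these logs, minikube resolves the node's SSH endpoint by templating `docker container inspect` (for example `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520`), and the inspect output above shows the resulting mapping: 22/tcp published on 127.0.0.1:33144. The same lookup can be done by decoding the inspect JSON; a minimal sketch (hypothetical standalone snippet; field names taken from the output above):

```go
// Resolve the published host port for 22/tcp from `docker inspect` JSON
// (illustrative sketch; minikube itself uses a Go template instead).
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "addons-241520").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var containers []inspect // `docker inspect` always returns an array
	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
		fmt.Fprintln(os.Stderr, "unexpected inspect output")
		os.Exit(1)
	}
	bindings := containers[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		fmt.Fprintln(os.Stderr, "22/tcp is not published")
		os.Exit(1)
	}
	fmt.Printf("ssh endpoint: %s:%s\n", bindings[0].HostIP, bindings[0].HostPort)
}
```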
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-241520 -n addons-241520
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-241520 logs -n 25: (1.485244461s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete │ -p download-docker-800978 │ download-docker-800978 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ start │ --download-only -p binary-mirror-172562 --alsologtostderr --binary-mirror http://127.0.0.1:37171 --driver=docker  --container-runtime=crio │ binary-mirror-172562 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ │
	│ delete │ -p binary-mirror-172562 │ binary-mirror-172562 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ addons │ disable dashboard -p addons-241520 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ │
	│ addons │ enable dashboard -p addons-241520 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ │
	│ start │ -p addons-241520 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:11 UTC │
	│ addons │ addons-241520 addons disable volcano --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:11 UTC │ │
	│ addons │ addons-241520 addons disable gcp-auth --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ │
	│ addons │ enable headlamp -p addons-241520 --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ │
	│ addons │ addons-241520 addons disable headlamp --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ │
	│ addons │ addons-241520 addons disable yakd --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ │
	│ ip │ addons-241520 ip │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ 10 Dec 25 06:12 UTC │
	│ addons │ addons-241520 addons disable registry --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ │
	│ addons │ addons-241520 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ │
	│ addons │ addons-241520 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ │
	│ ssh │ addons-241520 ssh cat /opt/local-path-provisioner/pvc-0d09bd84-80dd-472f-be69-d05ede5b8612_default_test-pvc/file1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ 10 Dec 25 06:12 UTC │
	│ addons │ addons-241520 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ │
	│ addons │ addons-241520 addons disable metrics-server --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ │
	│ ssh │ addons-241520 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │ │
	│ addons │ addons-241520 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ │
	│ addons │ addons-241520 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ │
	│ addons │ addons-241520 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ │
	│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-241520 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │ 10 Dec 25 06:13 UTC │
	│ addons  │ addons-241520 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-241520          │ jenkins │ v1.37.0 │ 10 Dec 25 06:13 UTC │                     │
	│ ip      │ addons-241520 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-241520          │ jenkins │ v1.37.0 │ 10 Dec 25 06:14 UTC │ 10 Dec 25 06:14 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:09:32
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
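
	The start log below uses the klog header format declared above: severity letter (I/W/E/F), date, timestamp, thread id, and source file:line. A quick way to isolate just the warning and error lines from a capture like this one, assuming the lines keep their leading tab and the capture is saved as minikube-start.log (both the file name and the grep invocation are illustrative):

	    # Pull W/E/F lines out of a klog-formatted capture (file name is hypothetical).
	    grep -E $'^\t[WEF][0-9]{4} ' minikube-start.log
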
	I1210 06:09:32.529123  365349 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:09:32.529326  365349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:09:32.529350  365349 out.go:374] Setting ErrFile to fd 2...
	I1210 06:09:32.529372  365349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:09:32.529770  365349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:09:32.530383  365349 out.go:368] Setting JSON to false
	I1210 06:09:32.531773  365349 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10325,"bootTime":1765336648,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:09:32.531855  365349 start.go:143] virtualization:  
	I1210 06:09:32.535070  365349 out.go:179] * [addons-241520] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:09:32.539042  365349 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:09:32.539183  365349 notify.go:221] Checking for updates...
	I1210 06:09:32.545122  365349 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:09:32.548050  365349 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:09:32.550947  365349 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:09:32.553883  365349 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:09:32.556785  365349 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:09:32.559912  365349 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:09:32.593384  365349 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:09:32.593553  365349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:09:32.654346  365349 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 06:09:32.643860997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:09:32.654460  365349 docker.go:319] overlay module found
	I1210 06:09:32.657509  365349 out.go:179] * Using the docker driver based on user configuration
	I1210 06:09:32.660314  365349 start.go:309] selected driver: docker
	I1210 06:09:32.660341  365349 start.go:927] validating driver "docker" against <nil>
	I1210 06:09:32.660355  365349 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:09:32.661114  365349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:09:32.717367  365349 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 06:09:32.707347217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:09:32.717535  365349 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:09:32.717752  365349 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:09:32.720671  365349 out.go:179] * Using Docker driver with root privileges
	I1210 06:09:32.723697  365349 cni.go:84] Creating CNI manager for ""
	I1210 06:09:32.723769  365349 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:09:32.723782  365349 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:09:32.723855  365349 start.go:353] cluster config:
	{Name:addons-241520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-241520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:09:32.726925  365349 out.go:179] * Starting "addons-241520" primary control-plane node in "addons-241520" cluster
	I1210 06:09:32.729874  365349 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:09:32.732804  365349 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:09:32.735675  365349 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:09:32.735777  365349 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:09:32.750125  365349 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 06:09:32.750266  365349 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 06:09:32.750294  365349 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1210 06:09:32.750305  365349 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1210 06:09:32.750313  365349 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1210 06:09:32.750323  365349 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	W1210 06:09:32.789648  365349 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 06:09:32.837274  365349 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 status code: 404
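
	Both preload mirrors above returned 404 for the v1.34.3 cri-o arm64 tarball, so the start falls back to caching images individually, as the cache.go lines below show. A minimal way to reproduce the availability check by hand, using the two URLs exactly as logged (the curl flags are my choice):

	    # Probe both preload locations; a 404 here is what triggers the per-image fallback.
	    for url in \
	      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4" \
	      "https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4"; do
	      curl -sIL -o /dev/null -w '%{http_code}  %{url_effective}\n' "$url"
	    done
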
	I1210 06:09:32.837680  365349 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/config.json ...
	I1210 06:09:32.837737  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/config.json: {Name:mk64eb852ee62fa3403e6dbb125af50407f65a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:09:32.838038  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:09:33.009068  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:09:33.175029  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
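
	Per the binary.go lines above, the kubeadm binary is fetched with a checksum pin rather than cached locally. A hand-rolled sketch of that checksum-verified download, assuming the published .sha256 file carries just the bare digest (the local file name is mine):

	    # Download the binary, then verify it against the published digest.
	    curl -fsSLo kubeadm "https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm"
	    echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256)  kubeadm" | sha256sum -c -
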
	I1210 06:09:33.360055  365349 cache.go:107] acquiring lock: {Name:mk02212e897dba66869d457b3bbeea186c9977d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360151  365349 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360245  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:09:33.360263  365349 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3" took 226.293µs
	I1210 06:09:33.360365  365349 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:09:33.360286  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:09:33.360387  365349 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 414.654µs
	I1210 06:09:33.360400  365349 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:09:33.360306  365349 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360454  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:09:33.360467  365349 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 163.498µs
	I1210 06:09:33.360474  365349 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:09:33.360341  365349 cache.go:107] acquiring lock: {Name:mk528ea302435a8d73a952727ebcf4c5d4bd15a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360763  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:09:33.360779  365349 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 438.433µs
	I1210 06:09:33.360787  365349 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:09:33.360613  365349 cache.go:107] acquiring lock: {Name:mkcde84ea8e341b56c14a9da0ddd80f253a2bcdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360835  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:09:33.360848  365349 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3" took 246.232µs
	I1210 06:09:33.360855  365349 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:09:33.360642  365349 cache.go:107] acquiring lock: {Name:mkd358dfd00c757fa5e4489a81c6d55b1de8de5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360658  365349 cache.go:107] acquiring lock: {Name:mk1e8ea2965a60a26ea6e464eb610a6affff1a11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360935  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:09:33.360940  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:09:33.360943  365349 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3" took 286.045µs
	I1210 06:09:33.360950  365349 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:09:33.360325  365349 cache.go:107] acquiring lock: {Name:mk028ba2317f3b1c037987bf153e02fff8ae3e15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360952  365349 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3" took 311.037µs
	I1210 06:09:33.360968  365349 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:09:33.360973  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:09:33.360978  365349 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 654.527µs
	I1210 06:09:33.360984  365349 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:09:33.361011  365349 cache.go:87] Successfully saved all images to host disk.
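
	The cache hits above follow a simple on-disk layout: an image reference maps to $MINIKUBE_HOME/cache/images/<arch>/<registry>/<name>_<tag>, with the ':' before the tag rewritten to '_'. A sketch of that mapping, with the arch hard-coded to arm64 to match this run:

	    # Derive the cache path the log reports for a given image reference.
	    image="registry.k8s.io/kube-proxy:v1.34.3"
	    echo "$MINIKUBE_HOME/cache/images/arm64/${image/:/_}"
	    # -> .../.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3
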
	I1210 06:09:51.014177  365349 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1210 06:09:51.014224  365349 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:09:51.014280  365349 start.go:360] acquireMachinesLock for addons-241520: {Name:mke5e792482575a95955cce7f5f982a5b20edf07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:51.014420  365349 start.go:364] duration metric: took 113.684µs to acquireMachinesLock for "addons-241520"
	I1210 06:09:51.014462  365349 start.go:93] Provisioning new machine with config: &{Name:addons-241520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-241520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:09:51.014542  365349 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:09:51.018199  365349 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1210 06:09:51.018466  365349 start.go:159] libmachine.API.Create for "addons-241520" (driver="docker")
	I1210 06:09:51.018505  365349 client.go:173] LocalClient.Create starting
	I1210 06:09:51.018617  365349 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem
	I1210 06:09:51.211349  365349 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem
	I1210 06:09:51.538996  365349 cli_runner.go:164] Run: docker network inspect addons-241520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:09:51.554794  365349 cli_runner.go:211] docker network inspect addons-241520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:09:51.554875  365349 network_create.go:284] running [docker network inspect addons-241520] to gather additional debugging logs...
	I1210 06:09:51.554910  365349 cli_runner.go:164] Run: docker network inspect addons-241520
	W1210 06:09:51.570430  365349 cli_runner.go:211] docker network inspect addons-241520 returned with exit code 1
	I1210 06:09:51.570463  365349 network_create.go:287] error running [docker network inspect addons-241520]: docker network inspect addons-241520: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-241520 not found
	I1210 06:09:51.570477  365349 network_create.go:289] output of [docker network inspect addons-241520]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-241520 not found
	
	** /stderr **
	I1210 06:09:51.570582  365349 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:09:51.586211  365349 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a21be0}
	I1210 06:09:51.586257  365349 network_create.go:124] attempt to create docker network addons-241520 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 06:09:51.586315  365349 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-241520 addons-241520
	I1210 06:09:51.646295  365349 network_create.go:108] docker network addons-241520 192.168.49.0/24 created
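
	With the bridge network in place, its subnet and gateway can be read back with the same kind of inspect template minikube itself runs; a trimmed check (the --format template here is mine):

	    # Confirm the subnet/gateway chosen for the cluster network.
	    docker network inspect addons-241520 \
	      --format 'Subnet={{range .IPAM.Config}}{{.Subnet}}{{end}} Gateway={{range .IPAM.Config}}{{.Gateway}}{{end}}'
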
	I1210 06:09:51.646332  365349 kic.go:121] calculated static IP "192.168.49.2" for the "addons-241520" container
	I1210 06:09:51.646442  365349 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:09:51.661902  365349 cli_runner.go:164] Run: docker volume create addons-241520 --label name.minikube.sigs.k8s.io=addons-241520 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:09:51.679826  365349 oci.go:103] Successfully created a docker volume addons-241520
	I1210 06:09:51.679938  365349 cli_runner.go:164] Run: docker run --rm --name addons-241520-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-241520 --entrypoint /usr/bin/test -v addons-241520:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:09:53.461927  365349 cli_runner.go:217] Completed: docker run --rm --name addons-241520-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-241520 --entrypoint /usr/bin/test -v addons-241520:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.781948447s)
	I1210 06:09:53.461962  365349 oci.go:107] Successfully prepared a docker volume addons-241520
	I1210 06:09:53.462014  365349 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 06:09:53.462151  365349 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 06:09:53.462259  365349 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:09:53.515716  365349 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-241520 --name addons-241520 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-241520 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-241520 --network addons-241520 --ip 192.168.49.2 --volume addons-241520:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
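
	The docker run above creates the entire KIC node in one command. Reflowed for readability (flags verbatim from the log, except the four --label flags, which are omitted here):

	    docker run -d -t --privileged \
	      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	      --tmpfs /tmp --tmpfs /run \
	      -v /lib/modules:/lib/modules:ro \
	      --hostname addons-241520 --name addons-241520 \
	      --network addons-241520 --ip 192.168.49.2 \
	      --volume addons-241520:/var \
	      --memory=4096mb --cpus=2 \
	      -e container=docker \
	      --expose 8443 \
	      --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
	      --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f

	Each --publish binds an ephemeral loopback port on the host; the 22/tcp mapping (33144 in this run) is what the SSH provisioning below goes through.
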
	I1210 06:09:53.827138  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Running}}
	I1210 06:09:53.849570  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:09:53.870491  365349 cli_runner.go:164] Run: docker exec addons-241520 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:09:53.924827  365349 oci.go:144] the created container "addons-241520" has a running status.
	I1210 06:09:53.924858  365349 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa...
	I1210 06:09:54.683129  365349 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:09:54.703197  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:09:54.720868  365349 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:09:54.720895  365349 kic_runner.go:114] Args: [docker exec --privileged addons-241520 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:09:54.762275  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:09:54.782380  365349 machine.go:94] provisionDockerMachine start ...
	I1210 06:09:54.782485  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:54.799652  365349 main.go:143] libmachine: Using SSH client type: native
	I1210 06:09:54.799992  365349 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1210 06:09:54.800009  365349 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:09:54.800704  365349 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 06:09:57.952976  365349 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-241520
	
	I1210 06:09:57.952998  365349 ubuntu.go:182] provisioning hostname "addons-241520"
	I1210 06:09:57.953064  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:57.969971  365349 main.go:143] libmachine: Using SSH client type: native
	I1210 06:09:57.970287  365349 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1210 06:09:57.970305  365349 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-241520 && echo "addons-241520" | sudo tee /etc/hostname
	I1210 06:09:58.130625  365349 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-241520
	
	I1210 06:09:58.130717  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:58.147988  365349 main.go:143] libmachine: Using SSH client type: native
	I1210 06:09:58.148312  365349 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1210 06:09:58.148334  365349 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-241520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-241520/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-241520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:09:58.300080  365349 main.go:143] libmachine: SSH cmd err, output: <nil>: 
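
	Everything in the provisioning phase above rides over that published SSH port with the generated key. A manual equivalent of the session, with port, key path and user taken from the log (the two -o options are my additions):

	    ssh -o BatchMode=yes -o StrictHostKeyChecking=no \
	        -i /home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa \
	        -p 33144 docker@127.0.0.1 hostname
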
	I1210 06:09:58.300108  365349 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 06:09:58.300130  365349 ubuntu.go:190] setting up certificates
	I1210 06:09:58.300140  365349 provision.go:84] configureAuth start
	I1210 06:09:58.300202  365349 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-241520
	I1210 06:09:58.321227  365349 provision.go:143] copyHostCerts
	I1210 06:09:58.321311  365349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 06:09:58.321438  365349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 06:09:58.321505  365349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 06:09:58.321556  365349 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.addons-241520 san=[127.0.0.1 192.168.49.2 addons-241520 localhost minikube]
	I1210 06:09:58.399449  365349 provision.go:177] copyRemoteCerts
	I1210 06:09:58.399513  365349 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:09:58.399558  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:58.419555  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:09:58.525001  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:09:58.542694  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 06:09:58.560204  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:09:58.578182  365349 provision.go:87] duration metric: took 278.027858ms to configureAuth
	I1210 06:09:58.578256  365349 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:09:58.578484  365349 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:09:58.578605  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:58.595532  365349 main.go:143] libmachine: Using SSH client type: native
	I1210 06:09:58.595854  365349 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1210 06:09:58.595877  365349 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:09:58.892851  365349 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:09:58.892932  365349 machine.go:97] duration metric: took 4.110524386s to provisionDockerMachine
	I1210 06:09:58.892949  365349 client.go:176] duration metric: took 7.874436356s to LocalClient.Create
	I1210 06:09:58.892966  365349 start.go:167] duration metric: took 7.874501875s to libmachine.API.Create "addons-241520"
	I1210 06:09:58.892974  365349 start.go:293] postStartSetup for "addons-241520" (driver="docker")
	I1210 06:09:58.892997  365349 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:09:58.893079  365349 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:09:58.893146  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:58.910813  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:09:59.017550  365349 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:09:59.020941  365349 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:09:59.020972  365349 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:09:59.020985  365349 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 06:09:59.021054  365349 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 06:09:59.021082  365349 start.go:296] duration metric: took 128.102921ms for postStartSetup
	I1210 06:09:59.021428  365349 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-241520
	I1210 06:09:59.038977  365349 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/config.json ...
	I1210 06:09:59.039268  365349 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:09:59.039331  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:59.057169  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:09:59.158157  365349 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:09:59.162898  365349 start.go:128] duration metric: took 8.148339639s to createHost
	I1210 06:09:59.162966  365349 start.go:83] releasing machines lock for "addons-241520", held for 8.148530396s
	I1210 06:09:59.163055  365349 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-241520
	I1210 06:09:59.179938  365349 ssh_runner.go:195] Run: cat /version.json
	I1210 06:09:59.179959  365349 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:09:59.179987  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:59.180019  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:59.199694  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:09:59.200295  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:09:59.385403  365349 ssh_runner.go:195] Run: systemctl --version
	I1210 06:09:59.391893  365349 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:09:59.426066  365349 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:09:59.430350  365349 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:09:59.430448  365349 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:09:59.462438  365349 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 06:09:59.462474  365349 start.go:496] detecting cgroup driver to use...
	I1210 06:09:59.462511  365349 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:09:59.462565  365349 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:09:59.479807  365349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:09:59.492248  365349 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:09:59.492330  365349 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:09:59.510336  365349 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:09:59.529552  365349 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:09:59.652062  365349 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:09:59.783482  365349 docker.go:234] disabling docker service ...
	I1210 06:09:59.783547  365349 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:09:59.804975  365349 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:09:59.818744  365349 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:09:59.934522  365349 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:10:00.061730  365349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
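
	Since cri-o is the selected runtime, the sequence above stops, disables, and masks the competing runtimes before crio itself is configured. The same hand-off, collected in one place (commands as they appear in the log):

	    sudo systemctl stop -f cri-docker.socket
	    sudo systemctl stop -f cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket
	    sudo systemctl stop -f docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
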
	I1210 06:10:00.083075  365349 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:10:00.105272  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:10:00.389445  365349 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:10:00.389537  365349 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.409708  365349 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:10:00.409795  365349 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.431770  365349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.447001  365349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.466326  365349 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:10:00.476995  365349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.493819  365349 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.513448  365349 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.531823  365349 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:10:00.541733  365349 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:10:00.573520  365349 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:00.703517  365349 ssh_runner.go:195] Run: sudo systemctl restart crio
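
	The run of sed calls above amounts to a small in-place rewrite of the cri-o drop-in config. Condensed into one script with the values straight from the log (the backup line is my addition; the sysctl-whitelist edits are omitted for brevity):

	    # Rewrite the cri-o drop-in for the cgroupfs driver and pause image, then restart.
	    conf=/etc/crio/crio.conf.d/02-crio.conf
	    sudo cp "$conf" "$conf.bak"
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
	    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio
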
	I1210 06:10:00.884561  365349 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:10:00.884722  365349 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:10:00.889155  365349 start.go:564] Will wait 60s for crictl version
	I1210 06:10:00.889286  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:00.893134  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:10:00.918284  365349 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:10:00.918431  365349 ssh_runner.go:195] Run: crio --version
	I1210 06:10:00.949097  365349 ssh_runner.go:195] Run: crio --version
	I1210 06:10:00.984010  365349 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 06:10:00.986972  365349 cli_runner.go:164] Run: docker network inspect addons-241520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:10:01.005570  365349 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:10:01.009681  365349 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:10:01.019949  365349 kubeadm.go:884] updating cluster {Name:addons-241520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-241520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:10:01.020138  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:10:01.169844  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:10:01.332453  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:10:01.484763  365349 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:10:01.484844  365349 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:10:01.511833  365349 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1210 06:10:01.511861  365349 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 06:10:01.511906  365349 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:01.511931  365349 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:01.512119  365349 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:10:01.512141  365349 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:01.512208  365349 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:01.512231  365349 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:01.512300  365349 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:01.512119  365349 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:01.514436  365349 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:01.514918  365349 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:01.515098  365349 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:01.515250  365349 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:01.515398  365349 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:01.515541  365349 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:10:01.515683  365349 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:01.516018  365349 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
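
Each "daemon lookup ... No such image" line records a probe of the host's image daemon before minikube falls back to its on-disk image cache. A hedged sketch of such a probe using "docker image inspect", whose exit status signals presence; minikube's image.go does this through a client library rather than shelling out:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // inDaemon reports whether the local daemon already has the image.
    func inDaemon(image string) bool {
    	// --format {{.Id}} keeps output to one ID line; a missing image exits non-zero.
    	err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Run()
    	return err == nil
    }

    func main() {
    	img := "registry.k8s.io/pause:3.10.1"
    	if inDaemon(img) {
    		fmt.Println(img, "available from the daemon")
    	} else {
    		fmt.Println(img, "missing; will load from the minikube image cache")
    	}
    }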
	I1210 06:10:01.861481  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:01.870674  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:01.890174  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:01.906149  365349 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1210 06:10:01.906239  365349 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:01.906334  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:01.920144  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 06:10:01.920391  365349 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1210 06:10:01.920451  365349 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:01.920493  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:01.950841  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:01.954960  365349 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162" in container runtime
	I1210 06:10:01.955154  365349 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:01.955202  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:01.955222  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:01.959053  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:01.978135  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:01.978297  365349 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 06:10:01.978363  365349 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:10:01.978411  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:02.009672  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:02.025954  365349 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896" in container runtime
	I1210 06:10:02.025996  365349 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:02.026050  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:02.026119  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:02.042262  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:02.067464  365349 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22" in container runtime
	I1210 06:10:02.067558  365349 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:02.067643  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:02.072703  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:10:02.072886  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:02.097345  365349 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6" in container runtime
	I1210 06:10:02.097433  365349 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:02.097516  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:02.117315  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:02.117470  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:02.149864  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:02.149957  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:02.152849  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:02.152917  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:10:02.152975  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:02.204217  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:02.204325  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:02.273146  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:02.273263  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1210 06:10:02.273570  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 06:10:02.273326  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:10:02.273349  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1210 06:10:02.273384  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:02.273805  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 06:10:02.297134  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3
	I1210 06:10:02.297488  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 06:10:02.297390  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:02.349137  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 06:10:02.349238  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:02.349245  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1210 06:10:02.349314  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 06:10:02.349335  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1210 06:10:02.349386  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 06:10:02.349463  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:10:02.349517  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:02.349543  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3
	I1210 06:10:02.349594  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 06:10:02.349663  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 06:10:02.349711  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (22806528 bytes)
	I1210 06:10:02.434699  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3
	I1210 06:10:02.434814  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 06:10:02.434881  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:10:02.434912  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 06:10:02.434953  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 06:10:02.434963  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (24578048 bytes)
	I1210 06:10:02.435025  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3
	I1210 06:10:02.435079  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 06:10:02.514301  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 06:10:02.514389  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (20730880 bytes)
	I1210 06:10:02.523374  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 06:10:02.523414  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (15787008 bytes)
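
The "existence check ... Process exited with status 1" blocks above implement a stat-then-copy pattern: stat the target path on the node first, and only scp the cached tarball across when the stat fails. A minimal sketch of the same pattern over plain ssh/scp; the host string and paths are illustrative, not minikube's ssh_runner plumbing:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // remoteExists runs `stat -c "%s %y" path` on the node; exit status 1
    // corresponds to the "No such file or directory" cases in the log.
    func remoteExists(host, path string) bool {
    	return exec.Command("ssh", host, "stat", "-c", "%s %y", path).Run() == nil
    }

    func main() {
    	host := "docker@192.168.49.2" // assumption: SSH endpoint of the node
    	src := ".minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0"
    	dst := "/var/lib/minikube/images/etcd_3.6.5-0"
    	if remoteExists(host, dst) {
    		fmt.Println("already transferred:", dst)
    		return
    	}
    	if err := exec.Command("scp", src, host+":"+dst).Run(); err != nil {
    		panic(err)
    	}
    	fmt.Println("copied", src, "->", dst)
    }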
	I1210 06:10:02.568890  365349 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:10:02.569020  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1210 06:10:02.744084  365349 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 06:10:02.744309  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:03.073173  365349 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 06:10:03.073322  365349 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:03.073276  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 06:10:03.073437  365349 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 06:10:03.073465  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 06:10:03.073544  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:04.551583  365349 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3: (1.478095905s)
	I1210 06:10:04.551620  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 from cache
	I1210 06:10:04.551640  365349 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 06:10:04.551690  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1210 06:10:04.551715  365349 ssh_runner.go:235] Completed: which crictl: (1.478139671s)
	I1210 06:10:04.551790  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:06.152515  365349 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.600801751s)
	I1210 06:10:06.152546  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1210 06:10:06.152549  365349 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.600742434s)
	I1210 06:10:06.152564  365349 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 06:10:06.152615  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1210 06:10:06.152615  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:07.914056  365349 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.76135325s)
	I1210 06:10:07.914136  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:07.914154  365349 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.761523075s)
	I1210 06:10:07.914173  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 06:10:07.914191  365349 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 06:10:07.914227  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 06:10:09.232851  365349 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3: (1.318602981s)
	I1210 06:10:09.232880  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 from cache
	I1210 06:10:09.232898  365349 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 06:10:09.232946  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 06:10:09.233015  365349 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.318869153s)
	I1210 06:10:09.233043  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:10:09.233110  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:10:10.399126  365349 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3: (1.16615352s)
	I1210 06:10:10.399156  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 from cache
	I1210 06:10:10.399174  365349 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 06:10:10.399223  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 06:10:10.399292  365349 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.166173385s)
	I1210 06:10:10.399311  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:10:10.399327  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 06:10:11.814179  365349 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3: (1.414929692s)
	I1210 06:10:11.814209  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 from cache
	I1210 06:10:11.814231  365349 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:10:11.814313  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:10:12.381863  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:10:12.381910  365349 cache_images.go:125] Successfully loaded all cached images
	I1210 06:10:12.381917  365349 cache_images.go:94] duration metric: took 10.870040774s to LoadCachedImages
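
Each "Loading image:" / "Transferred and loaded ... from cache" pair above corresponds to a "sudo podman load -i <tarball>" run inside the node, which is how the images reach CRI-O's store. A sketch of that loop as run on the node itself (in minikube the commands go over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	tarballs := []string{
    		"/var/lib/minikube/images/pause_3.10.1",
    		"/var/lib/minikube/images/kube-scheduler_v1.34.3",
    		"/var/lib/minikube/images/coredns_v1.12.1",
    		"/var/lib/minikube/images/etcd_3.6.5-0",
    	}
    	start := time.Now()
    	for _, t := range tarballs {
    		// podman shares CRI-O's containers/storage, so a load here is
    		// visible to the runtime.
    		if out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput(); err != nil {
    			panic(fmt.Sprintf("podman load %s: %v\n%s", t, err, out))
    		}
    		fmt.Println("loaded", t)
    	}
    	fmt.Printf("took %s to load images\n", time.Since(start))
    }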
	I1210 06:10:12.381929  365349 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1210 06:10:12.382035  365349 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-241520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-241520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
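
The kubelet unit printed above uses the standard systemd override idiom: the first, empty "ExecStart=" clears the packaged command and the second one sets minikube's. A hedged sketch of materializing such a drop-in (flags abbreviated; the real file is the 363-byte 10-kubeadm.conf scp'd later in the log):

    package main

    import "os"

    func main() {
    	const dir = "/etc/systemd/system/kubelet.service.d"
    	dropIn := "[Unit]\nWants=crio.service\n\n[Service]\n" +
    		// Empty ExecStart= resets the unit's command before redefining it.
    		"ExecStart=\n" +
    		"ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet" +
    		" --hostname-override=addons-241520 --node-ip=192.168.49.2\n"
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
    		panic(err)
    	}
    	// Followed by `systemctl daemon-reload` and `systemctl start kubelet`,
    	// as the next Run lines in the log show.
    }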
	I1210 06:10:12.382118  365349 ssh_runner.go:195] Run: crio config
	I1210 06:10:12.435192  365349 cni.go:84] Creating CNI manager for ""
	I1210 06:10:12.435310  365349 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:10:12.435337  365349 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:10:12.435362  365349 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-241520 NodeName:addons-241520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:10:12.435504  365349 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-241520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
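The rendered kubeadm config above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A quick structural sanity check in Go, splitting on the separator and reporting each document's kind; the path matches the kubeadm.yaml.new scp'd below:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		if m := kindRe.FindStringSubmatch(doc); m != nil {
    			fmt.Printf("doc %d: %s\n", i, m[1])
    		} else {
    			fmt.Printf("doc %d: no kind found\n", i)
    		}
    	}
    }
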
	I1210 06:10:12.435578  365349 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:10:12.443806  365349 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 06:10:12.443923  365349 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 06:10:12.452382  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256
	I1210 06:10:12.452462  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubelet.sha256
	I1210 06:10:12.452489  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 06:10:12.452557  365349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:10:12.452580  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:10:12.452635  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 06:10:12.467662  365349 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 06:10:12.467697  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 06:10:12.467722  365349 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 06:10:12.467732  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (71434424 bytes)
	I1210 06:10:12.467699  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (58130616 bytes)
	I1210 06:10:12.479264  365349 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 06:10:12.479346  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (56426788 bytes)
	I1210 06:10:13.342341  365349 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:10:13.351793  365349 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 06:10:13.366182  365349 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:10:13.379953  365349 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1210 06:10:13.393705  365349 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:10:13.398099  365349 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
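
The bash one-liner above makes the /etc/hosts update idempotent: it drops any existing control-plane.minikube.internal line, appends the current mapping, and copies the result back into place. The same logic as a standalone Go sketch (needs root, like the `sudo cp` in the log):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.49.2\tcontrol-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Mirror the grep -v $'\tcontrol-plane.minikube.internal$' filter.
    		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }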
	I1210 06:10:13.408916  365349 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:13.536069  365349 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:10:13.556613  365349 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520 for IP: 192.168.49.2
	I1210 06:10:13.556638  365349 certs.go:195] generating shared ca certs ...
	I1210 06:10:13.556655  365349 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:13.556797  365349 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 06:10:13.665642  365349 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt ...
	I1210 06:10:13.665679  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt: {Name:mk3294ca51bc393d6eb474de2127d23ebdb0e000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:13.665919  365349 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key ...
	I1210 06:10:13.665935  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key: {Name:mk3cbf7d8e863061adcb732ebb1f3925124a7d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:13.666024  365349 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 06:10:13.749667  365349 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt ...
	I1210 06:10:13.749713  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt: {Name:mka2a0678c24a34aafc71fb5a32c865f44d9d83b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:13.749918  365349 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key ...
	I1210 06:10:13.749938  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key: {Name:mk6612b7518e0a3b98473aa40d584b0ef31fbdf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:13.750028  365349 certs.go:257] generating profile certs ...
	I1210 06:10:13.750101  365349 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.key
	I1210 06:10:13.750119  365349 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt with IP's: []
	I1210 06:10:14.046427  365349 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt ...
	I1210 06:10:14.046470  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: {Name:mkf4a9c5f2c3da2d57ca27617d7315b5ace6f2a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:14.047463  365349 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.key ...
	I1210 06:10:14.047488  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.key: {Name:mkcadb65cf72cf66fd89d84d0da6d0e60d07aac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:14.047603  365349 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.key.e0455d5b
	I1210 06:10:14.047639  365349 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.crt.e0455d5b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 06:10:14.210195  365349 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.crt.e0455d5b ...
	I1210 06:10:14.210229  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.crt.e0455d5b: {Name:mkbfd561a6d0bb0ea4b99987ccb5a76507ecca44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:14.210414  365349 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.key.e0455d5b ...
	I1210 06:10:14.210431  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.key.e0455d5b: {Name:mkc057dfe18133c542ce4563bbd25ef24d5185d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:14.210520  365349 certs.go:382] copying /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.crt.e0455d5b -> /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.crt
	I1210 06:10:14.210599  365349 certs.go:386] copying /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.key.e0455d5b -> /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.key
	I1210 06:10:14.210652  365349 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.key
	I1210 06:10:14.210672  365349 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.crt with IP's: []
	I1210 06:10:14.348361  365349 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.crt ...
	I1210 06:10:14.348393  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.crt: {Name:mk264999c5af78ee55216c281016f59845db8bbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:14.348574  365349 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.key ...
	I1210 06:10:14.348589  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.key: {Name:mk87f9211b6cb9f59ff85aeea12277e09be68862 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:14.348783  365349 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:10:14.348830  365349 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:10:14.348865  365349 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:10:14.348897  365349 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
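
The "generating ... ca cert" steps above create self-signed CAs (minikubeCA, proxyClientCA) and then profile certs signed by them. A minimal standard-library sketch of the CA half; the key size, lifetime, and file names are illustrative rather than copied from minikube's crypto.go:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(3, 0, 0), // cf. CertExpiration:26280h0m0s above
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	// Self-signed: the template is both subject and issuer.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	crt := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	pk := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	if err := os.WriteFile("ca.crt", crt, 0o644); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("ca.key", pk, 0o600); err != nil {
    		panic(err)
    	}
    }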
	I1210 06:10:14.349503  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:10:14.368204  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:10:14.392042  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:10:14.412581  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:10:14.436763  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 06:10:14.456697  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:10:14.475852  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:10:14.494586  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:10:14.513414  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:10:14.531979  365349 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:10:14.545986  365349 ssh_runner.go:195] Run: openssl version
	I1210 06:10:14.552779  365349 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:14.560783  365349 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:10:14.568878  365349 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:14.572974  365349 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:14.573059  365349 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:14.614440  365349 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:10:14.622205  365349 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
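
The b5213941.0 symlink above comes from OpenSSL's subject-hash convention: `openssl x509 -hash` prints a short hash of the CA's subject, and OpenSSL-based clients look up trusted CAs at /etc/ssl/certs/<hash>.0. A sketch reproducing the two Run lines above:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // replace any stale link, like `ln -fs`
    	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link)
    }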
	I1210 06:10:14.630060  365349 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:10:14.633841  365349 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:10:14.633893  365349 kubeadm.go:401] StartCluster: {Name:addons-241520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-241520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:10:14.633971  365349 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:10:14.634032  365349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:10:14.660845  365349 cri.go:89] found id: ""
	I1210 06:10:14.660923  365349 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:10:14.669263  365349 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:10:14.677560  365349 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:10:14.677650  365349 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:10:14.685660  365349 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:10:14.685725  365349 kubeadm.go:158] found existing configuration files:
	
	I1210 06:10:14.685803  365349 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:10:14.693864  365349 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:10:14.693978  365349 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:10:14.701830  365349 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:10:14.710282  365349 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:10:14.710378  365349 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:10:14.718116  365349 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:10:14.726323  365349 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:10:14.726390  365349 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:10:14.734143  365349 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:10:14.742355  365349 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:10:14.742432  365349 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
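
The grep/rm sequence above sweeps stale kubeconfigs: any of the four files that does not reference https://control-plane.minikube.internal:8443 is removed so that `kubeadm init` regenerates it. The same sweep as a Go sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(f) // `sudo rm -f` in the log; a missing file is fine
    			fmt.Println("removed stale config:", f)
    		}
    	}
    }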
	I1210 06:10:14.750452  365349 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:10:14.814533  365349 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 06:10:14.814812  365349 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:10:14.885462  365349 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:10:31.984556  365349 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 06:10:31.984617  365349 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:10:31.984709  365349 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:10:31.984768  365349 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:10:31.984807  365349 kubeadm.go:319] OS: Linux
	I1210 06:10:31.984856  365349 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:10:31.984909  365349 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:10:31.984960  365349 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:10:31.985018  365349 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:10:31.985073  365349 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:10:31.985126  365349 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:10:31.985176  365349 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:10:31.985239  365349 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:10:31.985291  365349 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:10:31.985369  365349 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:10:31.985468  365349 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:10:31.985562  365349 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:10:31.985628  365349 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:10:31.988538  365349 out.go:252]   - Generating certificates and keys ...
	I1210 06:10:31.988664  365349 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:10:31.988746  365349 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:10:31.988844  365349 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:10:31.988932  365349 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:10:31.989026  365349 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:10:31.989121  365349 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:10:31.989218  365349 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:10:31.989360  365349 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-241520 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 06:10:31.989450  365349 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:10:31.989596  365349 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-241520 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 06:10:31.989673  365349 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:10:31.989742  365349 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:10:31.989793  365349 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:10:31.989852  365349 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:10:31.989911  365349 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:10:31.990000  365349 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:10:31.990083  365349 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:10:31.990157  365349 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:10:31.990213  365349 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:10:31.990337  365349 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:10:31.990436  365349 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:10:31.993447  365349 out.go:252]   - Booting up control plane ...
	I1210 06:10:31.993595  365349 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:10:31.993736  365349 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:10:31.993821  365349 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:10:31.993927  365349 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:10:31.994020  365349 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:10:31.994182  365349 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:10:31.994319  365349 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:10:31.994364  365349 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:10:31.994506  365349 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:10:31.994614  365349 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:10:31.994672  365349 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003879143s
	I1210 06:10:31.994843  365349 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:10:31.994949  365349 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1210 06:10:31.995048  365349 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:10:31.995138  365349 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 06:10:31.995224  365349 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.047564519s
	I1210 06:10:31.995299  365349 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.798266576s
	I1210 06:10:31.995375  365349 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502206802s
	I1210 06:10:31.995493  365349 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 06:10:31.995632  365349 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 06:10:31.995699  365349 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 06:10:31.995910  365349 kubeadm.go:319] [mark-control-plane] Marking the node addons-241520 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 06:10:31.995973  365349 kubeadm.go:319] [bootstrap-token] Using token: zcli1o.7gec4ombe4uo3w4h
	I1210 06:10:31.999172  365349 out.go:252]   - Configuring RBAC rules ...
	I1210 06:10:31.999482  365349 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 06:10:31.999570  365349 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 06:10:31.999727  365349 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 06:10:31.999855  365349 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 06:10:31.999975  365349 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 06:10:32.000061  365349 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 06:10:32.000176  365349 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 06:10:32.000227  365349 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 06:10:32.000278  365349 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 06:10:32.000282  365349 kubeadm.go:319] 
	I1210 06:10:32.000342  365349 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 06:10:32.000364  365349 kubeadm.go:319] 
	I1210 06:10:32.000446  365349 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 06:10:32.000449  365349 kubeadm.go:319] 
	I1210 06:10:32.000487  365349 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 06:10:32.000553  365349 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 06:10:32.000608  365349 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 06:10:32.000614  365349 kubeadm.go:319] 
	I1210 06:10:32.000681  365349 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 06:10:32.000685  365349 kubeadm.go:319] 
	I1210 06:10:32.000746  365349 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 06:10:32.000750  365349 kubeadm.go:319] 
	I1210 06:10:32.000807  365349 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 06:10:32.000890  365349 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 06:10:32.000959  365349 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 06:10:32.000971  365349 kubeadm.go:319] 
	I1210 06:10:32.001111  365349 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 06:10:32.001433  365349 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 06:10:32.001442  365349 kubeadm.go:319] 
	I1210 06:10:32.001551  365349 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zcli1o.7gec4ombe4uo3w4h \
	I1210 06:10:32.001685  365349 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:51315b15cc463daae0db99738888dd9b68c1a2544d5ab5bde8f25324b73b939c \
	I1210 06:10:32.001707  365349 kubeadm.go:319] 	--control-plane 
	I1210 06:10:32.001711  365349 kubeadm.go:319] 
	I1210 06:10:32.001824  365349 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 06:10:32.001829  365349 kubeadm.go:319] 
	I1210 06:10:32.001935  365349 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zcli1o.7gec4ombe4uo3w4h \
	I1210 06:10:32.002075  365349 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:51315b15cc463daae0db99738888dd9b68c1a2544d5ab5bde8f25324b73b939c 
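	
	For reference, the --discovery-token-ca-cert-hash that kubeadm prints above is a SHA-256 digest of the cluster CA's public key (the raw DER-encoded SubjectPublicKeyInfo from ca.crt). A minimal Go sketch that recomputes it from the standard kubeadm CA path on the control-plane node:
	
	    package main
	
	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/hex"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	    )
	
	    func main() {
	        // Standard kubeadm CA location on the control-plane node.
	        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(pemBytes)
	        if block == nil {
	            log.Fatal("no PEM data in ca.crt")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // kubeadm pins the SHA-256 of the raw SubjectPublicKeyInfo.
	        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	        fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
	    }
	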
	I1210 06:10:32.002096  365349 cni.go:84] Creating CNI manager for ""
	I1210 06:10:32.002105  365349 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:10:32.007238  365349 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 06:10:32.010414  365349 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 06:10:32.015634  365349 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1210 06:10:32.015659  365349 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 06:10:32.033603  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
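	
	The cni.go lines above key the CNI choice off the driver/runtime pair: with the docker driver and a non-docker runtime such as cri-o, pods need an explicit network, so minikube applies the kindnet manifest via the bundled kubectl. A hypothetical distillation of that decision (function name assumed for illustration; the real logic also weighs multi-node and user overrides):
	
	    package sketch
	
	    // chooseCNI sketches the selection logged by cni.go:143 above.
	    func chooseCNI(driver, runtime string) string {
	        if driver == "docker" && runtime != "docker" {
	            return "kindnet" // cri-o/containerd on the kic driver
	        }
	        return "" // docker runtime: the kic bridge default suffices
	    }
	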
	I1210 06:10:32.340500  365349 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 06:10:32.340643  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:32.340722  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-241520 minikube.k8s.io/updated_at=2025_12_10T06_10_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=addons-241520 minikube.k8s.io/primary=true
	I1210 06:10:32.535856  365349 ops.go:34] apiserver oom_adj: -16
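	
	The ops.go check above confirms the API server's OOM score adjustment; -16 makes the kernel much less likely to kill it under memory pressure. The same readout in Go, with the pid assumed already resolved (the log uses pgrep for that):
	
	    package sketch
	
	    import (
	        "fmt"
	        "os"
	        "strconv"
	        "strings"
	    )
	
	    // readOOMAdj mirrors `cat /proc/<pid>/oom_adj` from the log above.
	    func readOOMAdj(pid int) (int, error) {
	        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	        if err != nil {
	            return 0, err
	        }
	        return strconv.Atoi(strings.TrimSpace(string(b)))
	    }
	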
	I1210 06:10:32.536101  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:33.036878  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:33.536472  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:34.037072  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:34.536549  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:35.036155  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:35.536572  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:36.036702  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:36.536822  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:37.036648  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:37.536933  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:37.641862  365349 kubeadm.go:1114] duration metric: took 5.301271459s to wait for elevateKubeSystemPrivileges
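	
	The burst of `kubectl get sa default` calls above is a half-second poll: the privileges step cannot bind cluster-admin to the default service account until that account exists, so the lookup is retried until it succeeds (5.3s here). A generic sketch of that poll, assuming kubectl on PATH:
	
	    package sketch
	
	    import (
	        "context"
	        "os/exec"
	        "time"
	    )
	
	    // waitForDefaultSA polls "kubectl get sa default" every 500ms, as
	    // the log above does, until it succeeds or the context expires.
	    func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	        tick := time.NewTicker(500 * time.Millisecond)
	        defer tick.Stop()
	        for {
	            cmd := exec.CommandContext(ctx, "kubectl", "--kubeconfig",
	                kubeconfig, "get", "sa", "default")
	            if err := cmd.Run(); err == nil {
	                return nil
	            }
	            select {
	            case <-ctx.Done():
	                return ctx.Err()
	            case <-tick.C:
	            }
	        }
	    }
	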
	I1210 06:10:37.641902  365349 kubeadm.go:403] duration metric: took 23.008014399s to StartCluster
	I1210 06:10:37.641919  365349 settings.go:142] acquiring lock: {Name:mk50f3096e55008942432e93c6cf4976a70e957e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:37.642044  365349 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:10:37.642464  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:37.642655  365349 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 06:10:37.642680  365349 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:10:37.642919  365349 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:37.642951  365349 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 06:10:37.643039  365349 addons.go:70] Setting yakd=true in profile "addons-241520"
	I1210 06:10:37.643053  365349 addons.go:239] Setting addon yakd=true in "addons-241520"
	I1210 06:10:37.643074  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.643539  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.643984  365349 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-241520"
	I1210 06:10:37.644009  365349 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-241520"
	I1210 06:10:37.644025  365349 addons.go:70] Setting metrics-server=true in profile "addons-241520"
	I1210 06:10:37.644040  365349 addons.go:239] Setting addon metrics-server=true in "addons-241520"
	I1210 06:10:37.644035  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.644060  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.644474  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.644499  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.645075  365349 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-241520"
	I1210 06:10:37.645104  365349 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-241520"
	I1210 06:10:37.645132  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.645596  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.646961  365349 addons.go:70] Setting cloud-spanner=true in profile "addons-241520"
	I1210 06:10:37.646993  365349 addons.go:239] Setting addon cloud-spanner=true in "addons-241520"
	I1210 06:10:37.647036  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.647485  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.650015  365349 addons.go:70] Setting registry=true in profile "addons-241520"
	I1210 06:10:37.650053  365349 addons.go:239] Setting addon registry=true in "addons-241520"
	I1210 06:10:37.650095  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.650585  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.657448  365349 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-241520"
	I1210 06:10:37.657518  365349 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-241520"
	I1210 06:10:37.657551  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.658022  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.659296  365349 addons.go:70] Setting registry-creds=true in profile "addons-241520"
	I1210 06:10:37.659332  365349 addons.go:239] Setting addon registry-creds=true in "addons-241520"
	I1210 06:10:37.659381  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.659885  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.677862  365349 addons.go:70] Setting default-storageclass=true in profile "addons-241520"
	I1210 06:10:37.677884  365349 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-241520"
	I1210 06:10:37.677899  365349 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-241520"
	I1210 06:10:37.677907  365349 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-241520"
	I1210 06:10:37.678274  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.689461  365349 addons.go:70] Setting volcano=true in profile "addons-241520"
	I1210 06:10:37.689500  365349 addons.go:239] Setting addon volcano=true in "addons-241520"
	I1210 06:10:37.689542  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.677862  365349 addons.go:70] Setting storage-provisioner=true in profile "addons-241520"
	I1210 06:10:37.689824  365349 addons.go:239] Setting addon storage-provisioner=true in "addons-241520"
	I1210 06:10:37.689853  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.690060  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.690273  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.709527  365349 addons.go:70] Setting gcp-auth=true in profile "addons-241520"
	I1210 06:10:37.709566  365349 mustload.go:66] Loading cluster: addons-241520
	I1210 06:10:37.709766  365349 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:37.710023  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.717371  365349 addons.go:70] Setting volumesnapshots=true in profile "addons-241520"
	I1210 06:10:37.717407  365349 addons.go:239] Setting addon volumesnapshots=true in "addons-241520"
	I1210 06:10:37.717444  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.718367  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.725441  365349 addons.go:70] Setting ingress=true in profile "addons-241520"
	I1210 06:10:37.725475  365349 addons.go:239] Setting addon ingress=true in "addons-241520"
	I1210 06:10:37.725524  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.726004  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.743854  365349 addons.go:70] Setting ingress-dns=true in profile "addons-241520"
	I1210 06:10:37.743906  365349 addons.go:239] Setting addon ingress-dns=true in "addons-241520"
	I1210 06:10:37.743953  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.745451  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.763905  365349 out.go:179] * Verifying Kubernetes components...
	I1210 06:10:37.764546  365349 addons.go:70] Setting inspektor-gadget=true in profile "addons-241520"
	I1210 06:10:37.764583  365349 addons.go:239] Setting addon inspektor-gadget=true in "addons-241520"
	I1210 06:10:37.764623  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.765163  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.813535  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
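	
	The interleaved Setting-addon / host-check / docker-inspect lines between 06:10:37.643 and 06:10:37.813 suggest the enabled addons are configured concurrently against the same container. A minimal fan-out sketch under that assumption (all names hypothetical, not minikube's actual API):
	
	    package sketch
	
	    import (
	        "fmt"
	        "sync"
	    )
	
	    // enableAll fans out one goroutine per addon, as the interleaving
	    // above suggests; each enable() would check the container exists
	    // before applying that addon's YAML.
	    func enableAll(addons []string, enable func(name string) error) error {
	        var wg sync.WaitGroup
	        errs := make(chan error, len(addons))
	        for _, a := range addons {
	            wg.Add(1)
	            go func(name string) {
	                defer wg.Done()
	                if err := enable(name); err != nil {
	                    errs <- fmt.Errorf("enabling %s: %w", name, err)
	                }
	            }(a)
	        }
	        wg.Wait()
	        close(errs)
	        return <-errs // nil when every addon succeeded
	    }
	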
	I1210 06:10:37.882099  365349 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 06:10:37.893824  365349 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 06:10:37.893901  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 06:10:37.893986  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
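	
	The docker-inspect template quoted above is how the kic driver discovers which host port is published for the container's SSH port (22/tcp); it is what later surfaces as Port:33144 in the sshutil client lines. The same lookup in Go, shelling out exactly as cli_runner does:
	
	    package sketch
	
	    import (
	        "os/exec"
	        "strings"
	    )
	
	    // hostSSHPort resolves the published host port for 22/tcp using
	    // the same Go template shown in the cli_runner log line above.
	    func hostSSHPort(container string) (string, error) {
	        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	        out, err := exec.Command("docker", "container", "inspect",
	            "-f", tmpl, container).Output()
	        if err != nil {
	            return "", err
	        }
	        return strings.TrimSpace(string(out)), nil
	    }
	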
	I1210 06:10:37.937158  365349 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:37.942449  365349 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 06:10:37.949073  365349 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 06:10:37.949257  365349 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 06:10:37.949285  365349 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	W1210 06:10:37.951252  365349 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 06:10:37.952018  365349 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 06:10:37.952038  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 06:10:37.952107  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:37.971908  365349 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 06:10:37.974695  365349 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 06:10:37.974746  365349 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 06:10:37.974930  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.007859  365349 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 06:10:38.012326  365349 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 06:10:38.015332  365349 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 06:10:38.015359  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 06:10:38.015435  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.017666  365349 addons.go:239] Setting addon default-storageclass=true in "addons-241520"
	I1210 06:10:38.017719  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:38.018174  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:38.025739  365349 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-241520"
	I1210 06:10:38.025793  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:38.026257  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:38.038668  365349 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:38.038789  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:38.048532  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.055515  365349 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 06:10:38.055727  365349 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:10:38.055751  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:10:38.055828  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.074533  365349 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 06:10:38.081532  365349 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 06:10:38.081557  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 06:10:38.081629  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.091698  365349 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 06:10:38.097598  365349 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 06:10:38.097633  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 06:10:38.097706  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.100451  365349 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 06:10:38.100471  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 06:10:38.101142  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.128598  365349 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 06:10:38.131861  365349 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 06:10:38.134805  365349 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 06:10:38.141519  365349 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 06:10:38.155216  365349 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 06:10:38.155247  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 06:10:38.155311  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.168736  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 06:10:38.169242  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 06:10:38.171538  365349 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 06:10:38.171564  365349 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 06:10:38.171641  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.183411  365349 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 06:10:38.183434  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 06:10:38.183499  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.183735  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.186769  365349 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 06:10:38.187179  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.188804  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.208165  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 06:10:38.208408  365349 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 06:10:38.210242  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.253109  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 06:10:38.253406  365349 out.go:179]   - Using image docker.io/busybox:stable
	I1210 06:10:38.257784  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.272973  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 06:10:38.273255  365349 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 06:10:38.273313  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 06:10:38.273419  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.273826  365349 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:10:38.273838  365349 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:10:38.273883  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.299150  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 06:10:38.302234  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 06:10:38.307558  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 06:10:38.307734  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.309169  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.321361  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 06:10:38.324313  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 06:10:38.324381  365349 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 06:10:38.324476  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.339689  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.345510  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.347953  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.362056  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.369316  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.402911  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.403792  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.415159  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.517108  365349 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:10:39.124671  365349 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 06:10:39.124695  365349 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 06:10:39.155336  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 06:10:39.179860  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 06:10:39.222365  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 06:10:39.251814  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:10:39.351925  365349 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 06:10:39.351998  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 06:10:39.410223  365349 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 06:10:39.410301  365349 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 06:10:39.495067  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 06:10:39.512311  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 06:10:39.512392  365349 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 06:10:39.522663  365349 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 06:10:39.522728  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 06:10:39.544082  365349 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 06:10:39.544156  365349 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 06:10:39.583379  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 06:10:39.591230  365349 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 06:10:39.591305  365349 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 06:10:39.598388  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 06:10:39.607792  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 06:10:39.609829  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:10:39.615254  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 06:10:39.758701  365349 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 06:10:39.758782  365349 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 06:10:39.786983  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 06:10:39.787063  365349 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 06:10:39.789750  365349 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 06:10:39.789824  365349 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 06:10:39.873896  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 06:10:39.902165  365349 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 06:10:39.902240  365349 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 06:10:40.073041  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 06:10:40.073157  365349 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 06:10:40.092189  365349 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 06:10:40.092274  365349 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 06:10:40.106009  365349 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 06:10:40.106040  365349 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 06:10:40.305628  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 06:10:40.335931  365349 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 06:10:40.336006  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 06:10:40.470830  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 06:10:40.470904  365349 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 06:10:40.520189  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 06:10:40.520269  365349 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 06:10:40.598607  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 06:10:40.808509  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 06:10:40.808603  365349 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 06:10:40.814558  365349 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 06:10:40.814633  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 06:10:40.924102  365349 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 06:10:40.924177  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 06:10:41.015328  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 06:10:41.223537  365349 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 06:10:41.223615  365349 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 06:10:41.711331  365349 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 06:10:41.711405  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 06:10:41.742912  365349 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.225759413s)
	I1210 06:10:41.743777  365349 node_ready.go:35] waiting up to 6m0s for node "addons-241520" to be "Ready" ...
	I1210 06:10:41.744084  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.588672281s)
	I1210 06:10:41.744218  365349 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.557409573s)
	I1210 06:10:41.744256  365349 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
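	
	The 3.5s sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts `log` ahead of the `errors` plugin and, ahead of the `forward . /etc/resolv.conf` line, the hosts stanza below, which is what makes host.minikube.internal resolve to the host gateway from inside the cluster:
	
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	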
	I1210 06:10:42.049923  365349 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 06:10:42.049993  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 06:10:42.253360  365349 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-241520" context rescaled to 1 replicas
	I1210 06:10:42.361083  365349 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 06:10:42.361171  365349 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 06:10:42.609558  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.357667857s)
	I1210 06:10:42.609887  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.114744908s)
	I1210 06:10:42.609957  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.387068834s)
	I1210 06:10:42.610029  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.430098764s)
	I1210 06:10:42.668366  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1210 06:10:43.774900  365349 node_ready.go:57] node "addons-241520" has "Ready":"False" status (will retry)
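	
	node_ready.go treats the node as Ready only once its NodeReady condition reports True; these warnings are the poll observing False while kindnet and the kubelet finish coming up. In client-go terms the condition check amounts to roughly:
	
	    package sketch
	
	    import corev1 "k8s.io/api/core/v1"
	
	    // nodeIsReady mirrors the node_ready.go poll above: Ready means
	    // the NodeReady condition is present and True.
	    func nodeIsReady(n *corev1.Node) bool {
	        for _, c := range n.Status.Conditions {
	            if c.Type == corev1.NodeReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }
	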
	I1210 06:10:44.824528  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.241059969s)
	I1210 06:10:45.674835  365349 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 06:10:45.674915  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:45.701892  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:45.824867  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.226391271s)
	I1210 06:10:45.824896  365349 addons.go:495] Verifying addon ingress=true in "addons-241520"
	I1210 06:10:45.825168  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.217300105s)
	I1210 06:10:45.825392  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.215494067s)
	I1210 06:10:45.825451  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.210124624s)
	I1210 06:10:45.825475  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.951507291s)
	I1210 06:10:45.825876  365349 addons.go:495] Verifying addon registry=true in "addons-241520"
	I1210 06:10:45.825552  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.519899856s)
	I1210 06:10:45.826347  365349 addons.go:495] Verifying addon metrics-server=true in "addons-241520"
	I1210 06:10:45.825580  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.226910556s)
	I1210 06:10:45.825648  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.810240776s)
	W1210 06:10:45.827421  365349 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 06:10:45.827448  365349 retry.go:31] will retry after 352.689924ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
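	
	This failure is an ordering race rather than a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl batch as the CRD that defines its kind, and the API server has not yet established the new REST mapping, hence "ensure CRDs are installed first". retry.go backs off for a jittered interval and the batch is reapplied with --force at 06:10:46, which completes successfully below. The retry shape, sketched generically:
	
	    package sketch
	
	    import (
	        "math/rand"
	        "time"
	    )
	
	    // retryApply reapplies a manifest batch after a jittered delay,
	    // the pattern retry.go:31 logs above ("will retry after 352.689924ms").
	    func retryApply(apply func() error, attempts int, base time.Duration) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = apply(); err == nil {
	                return nil
	            }
	            // Jitter so concurrent retries do not synchronize.
	            time.Sleep(base + time.Duration(rand.Int63n(int64(base))))
	        }
	        return err
	    }
	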
	I1210 06:10:45.828315  365349 out.go:179] * Verifying registry addon...
	I1210 06:10:45.828363  365349 out.go:179] * Verifying ingress addon...
	I1210 06:10:45.830466  365349 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-241520 service yakd-dashboard -n yakd-dashboard
	
	I1210 06:10:45.833386  365349 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 06:10:45.834397  365349 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 06:10:45.835384  365349 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 06:10:45.852095  365349 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 06:10:45.852117  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:45.852297  365349 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 06:10:45.852304  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
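	
	The kapi.go waits above list each addon's pods by label selector and track their phase, looping while any is still Pending (typically image pulls). With client-go, the check is roughly:
	
	    package sketch
	
	    import (
	        "context"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )
	
	    // podsPending lists pods matching an addon's label selector, as
	    // the kapi.go waits above do, and reports whether any is Pending.
	    func podsPending(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
	        pods, err := cs.CoreV1().Pods(ns).List(ctx,
	            metav1.ListOptions{LabelSelector: selector})
	        if err != nil {
	            return false, err
	        }
	        for _, p := range pods.Items {
	            if p.Status.Phase == corev1.PodPending {
	                return true, nil
	            }
	        }
	        return false, nil
	    }
	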
	I1210 06:10:45.860524  365349 addons.go:239] Setting addon gcp-auth=true in "addons-241520"
	I1210 06:10:45.860625  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:45.861180  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:45.883483  365349 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 06:10:45.883539  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:45.903370  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:46.180664  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 06:10:46.203321  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.534843962s)
	I1210 06:10:46.203360  365349 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-241520"
	I1210 06:10:46.206707  365349 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 06:10:46.206713  365349 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 06:10:46.209697  365349 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 06:10:46.210471  365349 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 06:10:46.212742  365349 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 06:10:46.212777  365349 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 06:10:46.218830  365349 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 06:10:46.218850  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 06:10:46.249976  365349 node_ready.go:57] node "addons-241520" has "Ready":"False" status (will retry)
	I1210 06:10:46.262517  365349 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 06:10:46.262540  365349 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 06:10:46.280181  365349 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 06:10:46.280200  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 06:10:46.304120  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 06:10:46.339585  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:46.340081  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:46.715087  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:46.840287  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:46.840651  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:47.214168  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:47.336741  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:47.338387  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:47.713415  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:47.837934  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:47.838589  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:48.213910  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:48.336677  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:48.338077  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:48.720958  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 06:10:48.748311  365349 node_ready.go:57] node "addons-241520" has "Ready":"False" status (will retry)
	I1210 06:10:48.838462  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:48.839337  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:48.986925  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.806209274s)
	I1210 06:10:48.987010  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.682821381s)
	I1210 06:10:48.990089  365349 addons.go:495] Verifying addon gcp-auth=true in "addons-241520"
	I1210 06:10:48.993152  365349 out.go:179] * Verifying gcp-auth addon...
	I1210 06:10:48.996827  365349 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 06:10:49.000048  365349 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 06:10:49.000080  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:49.214583  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:49.336606  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:49.338138  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:49.499832  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:49.713902  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:49.838633  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:49.839278  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:50.001697  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:50.214288  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:50.337898  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:50.338374  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:50.500392  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:50.714815  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:50.837548  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:50.837659  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:51.003065  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:51.214255  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 06:10:51.247104  365349 node_ready.go:57] node "addons-241520" has "Ready":"False" status (will retry)
	I1210 06:10:51.337136  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:51.337386  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:51.500357  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:51.713441  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:51.837216  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:51.837645  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:52.008518  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:52.213538  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:52.337303  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:52.337440  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:52.500922  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:52.714341  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:52.838370  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:52.838543  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:53.003150  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:53.214388  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 06:10:53.247274  365349 node_ready.go:57] node "addons-241520" has "Ready":"False" status (will retry)
	I1210 06:10:53.336974  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:53.337722  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:53.500790  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:53.713660  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:53.836809  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:53.837518  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:54.074036  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:54.223806  365349 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 06:10:54.223831  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:54.266275  365349 node_ready.go:49] node "addons-241520" is "Ready"
	I1210 06:10:54.266306  365349 node_ready.go:38] duration metric: took 12.52246546s for node "addons-241520" to be "Ready" ...
	I1210 06:10:54.266320  365349 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:10:54.266379  365349 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:10:54.288568  365349 api_server.go:72] duration metric: took 16.645858708s to wait for apiserver process to appear ...
	I1210 06:10:54.288596  365349 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:10:54.288616  365349 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1210 06:10:54.301862  365349 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1210 06:10:54.305242  365349 api_server.go:141] control plane version: v1.34.3
	I1210 06:10:54.305274  365349 api_server.go:131] duration metric: took 16.670374ms to wait for apiserver health ...
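	The healthz probe above is a plain HTTPS GET: a healthy apiserver answers 200 with the literal body "ok", after which the control-plane version is read. A standalone sketch of that check follows; minikube authenticates with the cluster's client certificates, so the InsecureSkipVerify transport here (and the assumption that anonymous access to /healthz is permitted) is for illustration only:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// sketch only: skip cert verification instead of loading the cluster CA
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// a healthy apiserver returns status 200 with body "ok"
		fmt.Printf("https://192.168.49.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
	}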
	I1210 06:10:54.305284  365349 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:10:54.373969  365349 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 06:10:54.373998  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:54.374479  365349 system_pods.go:59] 19 kube-system pods found
	I1210 06:10:54.374518  365349 system_pods.go:61] "coredns-66bc5c9577-ds7m5" [4ab8c833-9e84-4a9c-a5a3-00db1cac3a38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:10:54.374533  365349 system_pods.go:61] "csi-hostpath-attacher-0" [51883b17-568a-4e55-98af-0e05b0b82a8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 06:10:54.374544  365349 system_pods.go:61] "csi-hostpath-resizer-0" [1e3ff8b8-58b0-4a82-a161-424bf106360a] Pending
	I1210 06:10:54.374549  365349 system_pods.go:61] "csi-hostpathplugin-qf6mx" [b90691c5-fbe5-45ea-b6af-52ae35765477] Pending
	I1210 06:10:54.374554  365349 system_pods.go:61] "etcd-addons-241520" [267e51ba-ce7e-4d65-9431-6d8524f1085b] Running
	I1210 06:10:54.374558  365349 system_pods.go:61] "kindnet-h9tr4" [b06a5696-c044-44ec-b102-c34cbf0b480b] Running
	I1210 06:10:54.374562  365349 system_pods.go:61] "kube-apiserver-addons-241520" [6bead7dc-7534-41e5-a5cb-dfb46451b561] Running
	I1210 06:10:54.374569  365349 system_pods.go:61] "kube-controller-manager-addons-241520" [5a837375-b97c-40af-b465-7d1ea919beac] Running
	I1210 06:10:54.374573  365349 system_pods.go:61] "kube-ingress-dns-minikube" [7543ba21-d18b-4d73-9940-71a8dfb241c0] Pending
	I1210 06:10:54.374578  365349 system_pods.go:61] "kube-proxy-srgdx" [7a04de78-15a5-4a0f-b6f4-b7cef4acc511] Running
	I1210 06:10:54.374582  365349 system_pods.go:61] "kube-scheduler-addons-241520" [e176a48a-a983-4774-add2-87513e2748ee] Running
	I1210 06:10:54.374593  365349 system_pods.go:61] "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:10:54.374597  365349 system_pods.go:61] "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Pending
	I1210 06:10:54.374607  365349 system_pods.go:61] "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Pending
	I1210 06:10:54.374613  365349 system_pods.go:61] "registry-creds-764b6fb674-xsqwd" [bb826bb4-0c23-4177-9e45-94b0b2c61e46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 06:10:54.374617  365349 system_pods.go:61] "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Pending
	I1210 06:10:54.374627  365349 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8pzhn" [a9c3f52c-7727-41a9-841c-c1fae4ed4636] Pending
	I1210 06:10:54.374631  365349 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qgnvq" [ba2995c6-cea0-497b-bd27-d8c59ebe811d] Pending
	I1210 06:10:54.374636  365349 system_pods.go:61] "storage-provisioner" [a864cb16-738a-4b0e-b0db-65485b264f6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:10:54.374643  365349 system_pods.go:74] duration metric: took 69.352988ms to wait for pod list to return data ...
	I1210 06:10:54.374656  365349 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:10:54.374936  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:54.500804  365349 default_sa.go:45] found service account: "default"
	I1210 06:10:54.500833  365349 default_sa.go:55] duration metric: took 126.170222ms for default service account to be created ...
	I1210 06:10:54.500844  365349 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:10:54.519999  365349 system_pods.go:86] 19 kube-system pods found
	I1210 06:10:54.520037  365349 system_pods.go:89] "coredns-66bc5c9577-ds7m5" [4ab8c833-9e84-4a9c-a5a3-00db1cac3a38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:10:54.520047  365349 system_pods.go:89] "csi-hostpath-attacher-0" [51883b17-568a-4e55-98af-0e05b0b82a8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 06:10:54.520052  365349 system_pods.go:89] "csi-hostpath-resizer-0" [1e3ff8b8-58b0-4a82-a161-424bf106360a] Pending
	I1210 06:10:54.520057  365349 system_pods.go:89] "csi-hostpathplugin-qf6mx" [b90691c5-fbe5-45ea-b6af-52ae35765477] Pending
	I1210 06:10:54.520061  365349 system_pods.go:89] "etcd-addons-241520" [267e51ba-ce7e-4d65-9431-6d8524f1085b] Running
	I1210 06:10:54.520065  365349 system_pods.go:89] "kindnet-h9tr4" [b06a5696-c044-44ec-b102-c34cbf0b480b] Running
	I1210 06:10:54.520069  365349 system_pods.go:89] "kube-apiserver-addons-241520" [6bead7dc-7534-41e5-a5cb-dfb46451b561] Running
	I1210 06:10:54.520078  365349 system_pods.go:89] "kube-controller-manager-addons-241520" [5a837375-b97c-40af-b465-7d1ea919beac] Running
	I1210 06:10:54.520082  365349 system_pods.go:89] "kube-ingress-dns-minikube" [7543ba21-d18b-4d73-9940-71a8dfb241c0] Pending
	I1210 06:10:54.520086  365349 system_pods.go:89] "kube-proxy-srgdx" [7a04de78-15a5-4a0f-b6f4-b7cef4acc511] Running
	I1210 06:10:54.520090  365349 system_pods.go:89] "kube-scheduler-addons-241520" [e176a48a-a983-4774-add2-87513e2748ee] Running
	I1210 06:10:54.520101  365349 system_pods.go:89] "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:10:54.520105  365349 system_pods.go:89] "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Pending
	I1210 06:10:54.520127  365349 system_pods.go:89] "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Pending
	I1210 06:10:54.520139  365349 system_pods.go:89] "registry-creds-764b6fb674-xsqwd" [bb826bb4-0c23-4177-9e45-94b0b2c61e46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 06:10:54.520144  365349 system_pods.go:89] "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Pending
	I1210 06:10:54.520150  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8pzhn" [a9c3f52c-7727-41a9-841c-c1fae4ed4636] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:54.520160  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qgnvq" [ba2995c6-cea0-497b-bd27-d8c59ebe811d] Pending
	I1210 06:10:54.520165  365349 system_pods.go:89] "storage-provisioner" [a864cb16-738a-4b0e-b0db-65485b264f6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:10:54.520179  365349 retry.go:31] will retry after 244.689844ms: missing components: kube-dns
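	The "will retry after ..." lines show the k8s-apps check re-running with a growing, jittered delay (about 245ms, then 388ms, 394ms, 562ms in this run) until no required component is missing. A self-contained sketch of that retry shape; the exact backoff policy, attempt cap, and jitter are assumptions here, only the log format is copied:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs check until it succeeds or attempts run out,
	// sleeping a jittered, growing interval between tries.
	func retryWithBackoff(attempts int, base time.Duration, check func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = check(); err == nil {
				return nil
			}
			// grow the base interval and add jitter so retries don't synchronize
			sleep := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return err
	}

	func main() {
		start := time.Now()
		// stand-in for "is kube-dns Running yet?": flips to success after one second
		err := retryWithBackoff(10, 200*time.Millisecond, func() error {
			if time.Since(start) < time.Second {
				return fmt.Errorf("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println("done:", err)
	}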
	I1210 06:10:54.535689  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:54.729897  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:54.784354  365349 system_pods.go:86] 19 kube-system pods found
	I1210 06:10:54.784398  365349 system_pods.go:89] "coredns-66bc5c9577-ds7m5" [4ab8c833-9e84-4a9c-a5a3-00db1cac3a38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:10:54.784408  365349 system_pods.go:89] "csi-hostpath-attacher-0" [51883b17-568a-4e55-98af-0e05b0b82a8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 06:10:54.784416  365349 system_pods.go:89] "csi-hostpath-resizer-0" [1e3ff8b8-58b0-4a82-a161-424bf106360a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 06:10:54.784424  365349 system_pods.go:89] "csi-hostpathplugin-qf6mx" [b90691c5-fbe5-45ea-b6af-52ae35765477] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 06:10:54.784433  365349 system_pods.go:89] "etcd-addons-241520" [267e51ba-ce7e-4d65-9431-6d8524f1085b] Running
	I1210 06:10:54.784438  365349 system_pods.go:89] "kindnet-h9tr4" [b06a5696-c044-44ec-b102-c34cbf0b480b] Running
	I1210 06:10:54.784447  365349 system_pods.go:89] "kube-apiserver-addons-241520" [6bead7dc-7534-41e5-a5cb-dfb46451b561] Running
	I1210 06:10:54.784451  365349 system_pods.go:89] "kube-controller-manager-addons-241520" [5a837375-b97c-40af-b465-7d1ea919beac] Running
	I1210 06:10:54.784460  365349 system_pods.go:89] "kube-ingress-dns-minikube" [7543ba21-d18b-4d73-9940-71a8dfb241c0] Pending
	I1210 06:10:54.784470  365349 system_pods.go:89] "kube-proxy-srgdx" [7a04de78-15a5-4a0f-b6f4-b7cef4acc511] Running
	I1210 06:10:54.784474  365349 system_pods.go:89] "kube-scheduler-addons-241520" [e176a48a-a983-4774-add2-87513e2748ee] Running
	I1210 06:10:54.784480  365349 system_pods.go:89] "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:10:54.784486  365349 system_pods.go:89] "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 06:10:54.784494  365349 system_pods.go:89] "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Pending
	I1210 06:10:54.784501  365349 system_pods.go:89] "registry-creds-764b6fb674-xsqwd" [bb826bb4-0c23-4177-9e45-94b0b2c61e46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 06:10:54.784504  365349 system_pods.go:89] "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Pending
	I1210 06:10:54.784512  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8pzhn" [a9c3f52c-7727-41a9-841c-c1fae4ed4636] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:54.784519  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qgnvq" [ba2995c6-cea0-497b-bd27-d8c59ebe811d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:54.784526  365349 system_pods.go:89] "storage-provisioner" [a864cb16-738a-4b0e-b0db-65485b264f6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:10:54.784542  365349 retry.go:31] will retry after 387.791714ms: missing components: kube-dns
	I1210 06:10:54.844051  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:54.844372  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:55.003664  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:55.179465  365349 system_pods.go:86] 19 kube-system pods found
	I1210 06:10:55.179509  365349 system_pods.go:89] "coredns-66bc5c9577-ds7m5" [4ab8c833-9e84-4a9c-a5a3-00db1cac3a38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:10:55.179529  365349 system_pods.go:89] "csi-hostpath-attacher-0" [51883b17-568a-4e55-98af-0e05b0b82a8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 06:10:55.179537  365349 system_pods.go:89] "csi-hostpath-resizer-0" [1e3ff8b8-58b0-4a82-a161-424bf106360a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 06:10:55.179544  365349 system_pods.go:89] "csi-hostpathplugin-qf6mx" [b90691c5-fbe5-45ea-b6af-52ae35765477] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 06:10:55.179552  365349 system_pods.go:89] "etcd-addons-241520" [267e51ba-ce7e-4d65-9431-6d8524f1085b] Running
	I1210 06:10:55.179557  365349 system_pods.go:89] "kindnet-h9tr4" [b06a5696-c044-44ec-b102-c34cbf0b480b] Running
	I1210 06:10:55.179567  365349 system_pods.go:89] "kube-apiserver-addons-241520" [6bead7dc-7534-41e5-a5cb-dfb46451b561] Running
	I1210 06:10:55.179572  365349 system_pods.go:89] "kube-controller-manager-addons-241520" [5a837375-b97c-40af-b465-7d1ea919beac] Running
	I1210 06:10:55.179579  365349 system_pods.go:89] "kube-ingress-dns-minikube" [7543ba21-d18b-4d73-9940-71a8dfb241c0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 06:10:55.179590  365349 system_pods.go:89] "kube-proxy-srgdx" [7a04de78-15a5-4a0f-b6f4-b7cef4acc511] Running
	I1210 06:10:55.179594  365349 system_pods.go:89] "kube-scheduler-addons-241520" [e176a48a-a983-4774-add2-87513e2748ee] Running
	I1210 06:10:55.179600  365349 system_pods.go:89] "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:10:55.179606  365349 system_pods.go:89] "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 06:10:55.179617  365349 system_pods.go:89] "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 06:10:55.179623  365349 system_pods.go:89] "registry-creds-764b6fb674-xsqwd" [bb826bb4-0c23-4177-9e45-94b0b2c61e46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 06:10:55.179628  365349 system_pods.go:89] "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 06:10:55.179634  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8pzhn" [a9c3f52c-7727-41a9-841c-c1fae4ed4636] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:55.179643  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qgnvq" [ba2995c6-cea0-497b-bd27-d8c59ebe811d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:55.179651  365349 system_pods.go:89] "storage-provisioner" [a864cb16-738a-4b0e-b0db-65485b264f6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:10:55.179670  365349 retry.go:31] will retry after 394.295586ms: missing components: kube-dns
	I1210 06:10:55.213911  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:55.339217  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:55.339602  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:55.500114  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:55.583642  365349 system_pods.go:86] 19 kube-system pods found
	I1210 06:10:55.583678  365349 system_pods.go:89] "coredns-66bc5c9577-ds7m5" [4ab8c833-9e84-4a9c-a5a3-00db1cac3a38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:10:55.583688  365349 system_pods.go:89] "csi-hostpath-attacher-0" [51883b17-568a-4e55-98af-0e05b0b82a8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 06:10:55.583696  365349 system_pods.go:89] "csi-hostpath-resizer-0" [1e3ff8b8-58b0-4a82-a161-424bf106360a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 06:10:55.583703  365349 system_pods.go:89] "csi-hostpathplugin-qf6mx" [b90691c5-fbe5-45ea-b6af-52ae35765477] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 06:10:55.583708  365349 system_pods.go:89] "etcd-addons-241520" [267e51ba-ce7e-4d65-9431-6d8524f1085b] Running
	I1210 06:10:55.583714  365349 system_pods.go:89] "kindnet-h9tr4" [b06a5696-c044-44ec-b102-c34cbf0b480b] Running
	I1210 06:10:55.583719  365349 system_pods.go:89] "kube-apiserver-addons-241520" [6bead7dc-7534-41e5-a5cb-dfb46451b561] Running
	I1210 06:10:55.583723  365349 system_pods.go:89] "kube-controller-manager-addons-241520" [5a837375-b97c-40af-b465-7d1ea919beac] Running
	I1210 06:10:55.583729  365349 system_pods.go:89] "kube-ingress-dns-minikube" [7543ba21-d18b-4d73-9940-71a8dfb241c0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 06:10:55.583733  365349 system_pods.go:89] "kube-proxy-srgdx" [7a04de78-15a5-4a0f-b6f4-b7cef4acc511] Running
	I1210 06:10:55.583737  365349 system_pods.go:89] "kube-scheduler-addons-241520" [e176a48a-a983-4774-add2-87513e2748ee] Running
	I1210 06:10:55.583743  365349 system_pods.go:89] "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:10:55.583749  365349 system_pods.go:89] "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 06:10:55.583754  365349 system_pods.go:89] "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 06:10:55.583763  365349 system_pods.go:89] "registry-creds-764b6fb674-xsqwd" [bb826bb4-0c23-4177-9e45-94b0b2c61e46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 06:10:55.583768  365349 system_pods.go:89] "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 06:10:55.583775  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8pzhn" [a9c3f52c-7727-41a9-841c-c1fae4ed4636] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:55.583782  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qgnvq" [ba2995c6-cea0-497b-bd27-d8c59ebe811d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:55.583791  365349 system_pods.go:89] "storage-provisioner" [a864cb16-738a-4b0e-b0db-65485b264f6c] Running
	I1210 06:10:55.583806  365349 retry.go:31] will retry after 561.743673ms: missing components: kube-dns
	I1210 06:10:55.714900  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:55.839523  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:55.840052  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:56.000411  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:56.151027  365349 system_pods.go:86] 19 kube-system pods found
	I1210 06:10:56.151108  365349 system_pods.go:89] "coredns-66bc5c9577-ds7m5" [4ab8c833-9e84-4a9c-a5a3-00db1cac3a38] Running
	I1210 06:10:56.151136  365349 system_pods.go:89] "csi-hostpath-attacher-0" [51883b17-568a-4e55-98af-0e05b0b82a8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 06:10:56.151162  365349 system_pods.go:89] "csi-hostpath-resizer-0" [1e3ff8b8-58b0-4a82-a161-424bf106360a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 06:10:56.151207  365349 system_pods.go:89] "csi-hostpathplugin-qf6mx" [b90691c5-fbe5-45ea-b6af-52ae35765477] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 06:10:56.151229  365349 system_pods.go:89] "etcd-addons-241520" [267e51ba-ce7e-4d65-9431-6d8524f1085b] Running
	I1210 06:10:56.151252  365349 system_pods.go:89] "kindnet-h9tr4" [b06a5696-c044-44ec-b102-c34cbf0b480b] Running
	I1210 06:10:56.151284  365349 system_pods.go:89] "kube-apiserver-addons-241520" [6bead7dc-7534-41e5-a5cb-dfb46451b561] Running
	I1210 06:10:56.151310  365349 system_pods.go:89] "kube-controller-manager-addons-241520" [5a837375-b97c-40af-b465-7d1ea919beac] Running
	I1210 06:10:56.151337  365349 system_pods.go:89] "kube-ingress-dns-minikube" [7543ba21-d18b-4d73-9940-71a8dfb241c0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 06:10:56.151387  365349 system_pods.go:89] "kube-proxy-srgdx" [7a04de78-15a5-4a0f-b6f4-b7cef4acc511] Running
	I1210 06:10:56.151412  365349 system_pods.go:89] "kube-scheduler-addons-241520" [e176a48a-a983-4774-add2-87513e2748ee] Running
	I1210 06:10:56.151432  365349 system_pods.go:89] "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:10:56.151454  365349 system_pods.go:89] "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 06:10:56.151487  365349 system_pods.go:89] "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 06:10:56.151512  365349 system_pods.go:89] "registry-creds-764b6fb674-xsqwd" [bb826bb4-0c23-4177-9e45-94b0b2c61e46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 06:10:56.151537  365349 system_pods.go:89] "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 06:10:56.151561  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8pzhn" [a9c3f52c-7727-41a9-841c-c1fae4ed4636] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:56.151594  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qgnvq" [ba2995c6-cea0-497b-bd27-d8c59ebe811d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:56.151620  365349 system_pods.go:89] "storage-provisioner" [a864cb16-738a-4b0e-b0db-65485b264f6c] Running
	I1210 06:10:56.151648  365349 system_pods.go:126] duration metric: took 1.650797257s to wait for k8s-apps to be running ...
	I1210 06:10:56.151672  365349 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:10:56.151761  365349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:10:56.173744  365349 system_svc.go:56] duration metric: took 22.063143ms WaitForService to wait for kubelet
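	The kubelet check above shells out to systemd: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so the exit status is the whole answer. A sketch of the same probe run locally (minikube runs it inside the node over SSH with sudo, as the ssh_runner line shows):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// isActive reports whether a systemd unit is active; --quiet suppresses
	// output, leaving only the exit code to inspect.
	func isActive(unit string) bool {
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", isActive("kubelet"))
	}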
	I1210 06:10:56.173775  365349 kubeadm.go:587] duration metric: took 18.531070355s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:10:56.173792  365349 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:10:56.176892  365349 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 06:10:56.176925  365349 node_conditions.go:123] node cpu capacity is 2
	I1210 06:10:56.176940  365349 node_conditions.go:105] duration metric: took 3.142529ms to run NodePressure ...
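	The NodePressure step reads the node's capacity (ephemeral storage 203034800Ki and 2 CPUs here) and checks its condition list. A client-go sketch of the same read, assuming a reachable kubeconfig at the default path; only the node name comes from the log, and the set of conditions inspected is an assumption about what a pressure check looks at:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-241520", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// the two capacity figures logged before the NodePressure verdict
		fmt.Println("ephemeral-storage:", node.Status.Capacity[corev1.ResourceEphemeralStorage])
		fmt.Println("cpu:", node.Status.Capacity[corev1.ResourceCPU])
		// the node conditions a pressure check would inspect
		for _, c := range node.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("%s=%s\n", c.Type, c.Status)
			}
		}
	}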
	I1210 06:10:56.176953  365349 start.go:242] waiting for startup goroutines ...
	I1210 06:10:56.214154  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:56.340049  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:56.340184  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:56.500615  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:56.714221  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:56.838995  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:56.839448  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:57.001646  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:57.214355  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:57.337876  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:57.339766  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:57.500028  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:57.730081  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:57.837849  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:57.838007  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:58.002580  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:58.215120  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:58.337953  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:58.338105  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:58.500000  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:58.714645  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:58.839451  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:58.839797  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:59.002035  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:59.218119  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:59.338243  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:59.338494  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:59.500425  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:59.719097  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:59.837976  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:59.838688  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:00.017871  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:00.242280  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:00.355927  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:00.368691  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:00.500621  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:00.715788  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:00.839484  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:00.839740  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:01.001745  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:01.214078  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:01.339024  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:01.339319  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:01.500353  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:01.715089  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:01.838540  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:01.839039  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:02.001390  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:02.214851  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:02.338317  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:02.340447  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:02.500668  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:02.714704  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:02.839672  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:02.840071  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:03.007211  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:03.215979  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:03.339019  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:03.339536  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:03.508369  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:03.714805  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:03.842923  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:03.843789  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:04.003188  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:04.216672  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:04.372149  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:04.372681  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:04.519104  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:04.715013  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:04.845935  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:04.846249  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:05.001365  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:05.213861  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:05.337737  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:05.338996  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:05.500470  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:05.714514  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:05.838169  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:05.838887  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:06.000735  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:06.214317  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:06.340486  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:06.340976  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:06.502519  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:06.714433  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:06.839883  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:06.843709  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:07.001418  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:07.215647  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:07.338439  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:07.338554  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:07.500680  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:07.722174  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:07.839559  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:07.839801  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:08.002659  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:08.214462  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:08.337832  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:08.338323  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:08.500487  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:08.714457  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:08.840399  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:08.840493  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:09.002014  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:09.214416  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:09.337649  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:09.337950  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:09.499777  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:09.730317  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:09.838773  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:09.840809  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:10.024853  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:10.214499  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:10.338923  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:10.339163  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:10.499767  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:10.714146  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:10.862511  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:10.862758  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:11.001412  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:11.214159  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:11.339510  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:11.340754  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:11.500746  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:11.719286  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:11.838965  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:11.838970  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:12.003828  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:12.214544  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:12.338067  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:12.338246  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:12.500868  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:12.714924  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:12.840328  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:12.840779  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:13.000859  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:13.215536  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:13.338305  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:13.338485  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:13.501124  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:13.714577  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:13.838665  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:13.839755  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:14.001659  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:14.214727  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:14.339032  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:14.339420  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:14.500839  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:14.715317  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:14.838863  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:14.839050  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:15.001268  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:15.214751  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:15.337067  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:15.339762  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:15.500013  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:15.714406  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:15.836545  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:15.838741  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:16.001738  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:16.214115  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:16.338064  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:16.339343  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:16.502402  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:16.713845  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:16.838428  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:16.838611  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:17.002052  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:17.214856  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:17.341035  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:17.341639  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:17.500473  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:17.713910  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:17.837441  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:17.838163  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:18.008103  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:18.213513  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:18.339532  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:18.339740  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:18.501001  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:18.714701  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:18.847306  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:18.848557  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:19.002212  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:19.214595  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:19.336718  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:19.339337  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:19.500882  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:19.714163  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:19.840479  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:19.840557  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:20.007360  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:20.215117  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:20.338143  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:20.338871  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:20.499899  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:20.715042  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:20.839180  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:20.839715  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:21.007217  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:21.215087  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:21.337688  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:21.337863  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:21.499751  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:21.714079  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:21.839188  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:21.839407  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:22.001840  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:22.214666  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:22.340400  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:22.340823  365349 kapi.go:107] duration metric: took 36.507441437s to wait for kubernetes.io/minikube-addons=registry ...
	I1210 06:11:22.500130  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:22.714261  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:22.838127  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:23.002011  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:23.221771  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:23.337692  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:23.501055  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:23.715018  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:23.838931  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:24.001154  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:24.215099  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:24.338469  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:24.500670  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:24.714687  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:24.838466  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:25.002400  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:25.214019  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:25.338592  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:25.501034  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:25.714195  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:25.838393  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:26.003500  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:26.214132  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:26.339186  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:26.500650  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:26.716272  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:26.838056  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:27.008434  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:27.215344  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:27.348563  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:27.503194  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:27.714420  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:27.838441  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:28.001916  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:28.214778  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:28.337882  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:28.499991  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:28.714033  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:28.838037  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:29.000404  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:29.214808  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:29.338473  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:29.501262  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:29.715165  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:29.838914  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:30.000940  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:30.215906  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:30.339440  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:30.501003  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:30.722333  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:30.837688  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:31.006302  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:31.214087  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:31.339855  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:31.500910  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:31.714517  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:31.838011  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:32.000654  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:32.213833  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:32.343639  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:32.500704  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:32.714288  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:32.838353  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:33.002356  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:33.214485  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:33.337916  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:33.500988  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:33.714779  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:33.838247  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:34.001686  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:34.213740  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:34.338325  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:34.500441  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:34.715168  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:34.838670  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:35.018030  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:35.215555  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:35.337493  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:35.500794  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:35.714099  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:35.838391  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:36.001565  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:36.214736  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:36.338312  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:36.500273  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:36.714050  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:36.838668  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:37.000821  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:37.215778  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:37.338615  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:37.500846  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:37.714305  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:37.838276  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:38.001152  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:38.214014  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:38.338290  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:38.500316  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:38.714330  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:38.841143  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:39.003214  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:39.218919  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:39.338205  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:39.502296  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:39.716077  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:39.838596  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:40.006691  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:40.223579  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:40.337747  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:40.501320  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:40.729697  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:40.838406  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:41.003487  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:41.214773  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:41.346630  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:41.500233  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:41.714202  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:41.839120  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:42.001178  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:42.226380  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:42.361477  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:42.500646  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:42.716106  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:42.838295  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:43.001176  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:43.214698  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:43.337940  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:43.499954  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:43.714679  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:43.838239  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:44.001048  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:44.214674  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:44.338054  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:44.500177  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:44.714696  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:44.838687  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:45.001665  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:45.234637  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:45.341275  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:45.500582  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:45.714335  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:45.837769  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:46.000189  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:46.216939  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:46.338229  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:46.500285  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:46.714883  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:46.837886  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:47.017594  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:47.214232  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:47.338326  365349 kapi.go:107] duration metric: took 1m1.503927338s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 06:11:47.576367  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:47.715045  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:48.002007  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:48.214898  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:48.500129  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:48.714413  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:49.002324  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:49.215852  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:49.500436  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:49.714941  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:50.000968  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:50.215152  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:50.500608  365349 kapi.go:107] duration metric: took 1m1.503782741s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 06:11:50.503876  365349 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-241520 cluster.
	I1210 06:11:50.506859  365349 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 06:11:50.509668  365349 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
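
The out.go hint above names the `gcp-auth-skip-secret` label but shows no example. Below is a minimal client-go sketch of a pod that opts out of credential mounting, assuming the webhook keys off the label's presence; the pod name, namespace, image, and the "true" value are all placeholders, not taken from this report.

// skipsecret.go - a sketch of a pod carrying the gcp-auth-skip-secret label
// mentioned in the out.go messages above; names, image, and label value are
// assumptions.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func skipSecretPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-creds",
			Namespace: "default",
			// The gcp-auth webhook is expected to skip pods with this label key.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "docker.io/library/busybox:1.36"},
			},
		},
	}
}

func main() { _ = skipSecretPod() }
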
	I1210 06:11:50.715139  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:51.215334  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:51.713822  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:52.235382  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:52.717702  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:53.214644  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:53.714725  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:54.214311  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:54.714739  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:55.218472  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:55.714552  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:56.214757  365349 kapi.go:107] duration metric: took 1m10.004284177s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 06:11:56.217977  365349 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, default-storageclass, inspektor-gadget, ingress-dns, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1210 06:11:56.221114  365349 addons.go:530] duration metric: took 1m18.578149743s for enable addons: enabled=[registry-creds amd-gpu-device-plugin nvidia-device-plugin cloud-spanner default-storageclass inspektor-gadget ingress-dns storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1210 06:11:56.221180  365349 start.go:247] waiting for cluster config update ...
	I1210 06:11:56.221242  365349 start.go:256] writing updated cluster config ...
	I1210 06:11:56.221570  365349 ssh_runner.go:195] Run: rm -f paused
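
The long run of kapi.go:96 lines above comes from a poll loop: list the pods matching a label selector roughly twice a second and report their phase until they leave Pending. What follows is a minimal client-go sketch of that pattern, not minikube's actual kapi.go code; it assumes a default kubeconfig and reuses the registry selector and kube-system namespace seen in the log.

// waitforpods.go - a sketch of the label-selector polling behind the
// kapi.go:96 "waiting for pod" lines above; selector and namespace come from
// the log, everything else is an assumption.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	selector := "kubernetes.io/minikube-addons=registry" // selector from the log
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists are retried
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
}
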
	I1210 06:11:56.226619  365349 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:11:56.230833  365349 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ds7m5" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.237489  365349 pod_ready.go:94] pod "coredns-66bc5c9577-ds7m5" is "Ready"
	I1210 06:11:56.237520  365349 pod_ready.go:86] duration metric: took 6.655308ms for pod "coredns-66bc5c9577-ds7m5" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.239836  365349 pod_ready.go:83] waiting for pod "etcd-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.245140  365349 pod_ready.go:94] pod "etcd-addons-241520" is "Ready"
	I1210 06:11:56.245168  365349 pod_ready.go:86] duration metric: took 5.251314ms for pod "etcd-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.247745  365349 pod_ready.go:83] waiting for pod "kube-apiserver-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.252527  365349 pod_ready.go:94] pod "kube-apiserver-addons-241520" is "Ready"
	I1210 06:11:56.252557  365349 pod_ready.go:86] duration metric: took 4.785878ms for pod "kube-apiserver-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.255380  365349 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.631193  365349 pod_ready.go:94] pod "kube-controller-manager-addons-241520" is "Ready"
	I1210 06:11:56.631228  365349 pod_ready.go:86] duration metric: took 375.824764ms for pod "kube-controller-manager-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.830389  365349 pod_ready.go:83] waiting for pod "kube-proxy-srgdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:57.230999  365349 pod_ready.go:94] pod "kube-proxy-srgdx" is "Ready"
	I1210 06:11:57.231069  365349 pod_ready.go:86] duration metric: took 400.650182ms for pod "kube-proxy-srgdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:57.430658  365349 pod_ready.go:83] waiting for pod "kube-scheduler-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:57.830850  365349 pod_ready.go:94] pod "kube-scheduler-addons-241520" is "Ready"
	I1210 06:11:57.830878  365349 pod_ready.go:86] duration metric: took 400.148657ms for pod "kube-scheduler-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:57.830892  365349 pod_ready.go:40] duration metric: took 1.604237464s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:11:57.886532  365349 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1210 06:11:57.890225  365349 out.go:179] * Done! kubectl is now configured to use "addons-241520" cluster and "default" namespace by default
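
The pod_ready.go lines above wait on a different predicate than kapi.go: not the pod's phase but its "Ready" condition. A small sketch of that check follows, with a stub pod standing in for a live API object; this mirrors the standard PodReady condition semantics rather than minikube's exact helper.

// podready.go - a sketch of the readiness check implied by the pod_ready.go
// lines above: a pod counts as "Ready" when its PodReady condition is True.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady scans the pod's status conditions for PodReady == True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println(isPodReady(pod)) // true
}
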
	
	
	==> CRI-O <==
	Dec 10 06:13:31 addons-241520 crio[829]: time="2025-12-10T06:13:31.529963384Z" level=info msg="Removed pod sandbox: 848504909f5cd8a637e4743c0d15119b8bce96ac817544667408b6def16893bb" id=0e915eec-b590-4b5a-ae20-496d5862447d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.187650439Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-zb98t/POD" id=3a526b94-e130-48e8-9190-031ce2ac9fe7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.187719396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.205961147Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-zb98t Namespace:default ID:7343ed31203c3aefe169b35a02055dd1b7b028f2fb62fffb6658634935b9409b UID:70669ccd-53bb-4d4e-8c05-9c2273ade686 NetNS:/var/run/netns/5962302c-480d-44ea-89ce-89dace50bee5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001767670}] Aliases:map[]}"
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.211748558Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-zb98t to CNI network \"kindnet\" (type=ptp)"
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.248461856Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-zb98t Namespace:default ID:7343ed31203c3aefe169b35a02055dd1b7b028f2fb62fffb6658634935b9409b UID:70669ccd-53bb-4d4e-8c05-9c2273ade686 NetNS:/var/run/netns/5962302c-480d-44ea-89ce-89dace50bee5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001767670}] Aliases:map[]}"
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.248777586Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-zb98t for CNI network kindnet (type=ptp)"
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.274681935Z" level=info msg="Ran pod sandbox 7343ed31203c3aefe169b35a02055dd1b7b028f2fb62fffb6658634935b9409b with infra container: default/hello-world-app-5d498dc89-zb98t/POD" id=3a526b94-e130-48e8-9190-031ce2ac9fe7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.279271354Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b3258a78-c619-46a4-a5d7-c3017a3a35b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.279568679Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=b3258a78-c619-46a4-a5d7-c3017a3a35b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.279677415Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=b3258a78-c619-46a4-a5d7-c3017a3a35b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.283141541Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=58c951cd-5eb1-4dcc-8e59-b5de2759c3f9 name=/runtime.v1.ImageService/PullImage
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.289298294Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.895347393Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=58c951cd-5eb1-4dcc-8e59-b5de2759c3f9 name=/runtime.v1.ImageService/PullImage
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.896649911Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=91fc170e-5deb-4ba0-8a3b-2c4321f6c889 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.89845185Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=8a33016e-868c-4e7d-b90b-9bbdfec3911f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.908797581Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-zb98t/hello-world-app" id=89b8cf8d-aa9a-4114-b5a6-3a1ef67776ce name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.909051968Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.918478088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.918848998Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d5ced6d3b740d520160c17db9a4052a3b4f1af8b603f02ec0768588e7124fa8e/merged/etc/passwd: no such file or directory"
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.918952089Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d5ced6d3b740d520160c17db9a4052a3b4f1af8b603f02ec0768588e7124fa8e/merged/etc/group: no such file or directory"
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.919291294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.962193732Z" level=info msg="Created container e3c46cc507aa8eb9585806cff2a5a7f3a6eb054984edb8632e7dcb0c245218fe: default/hello-world-app-5d498dc89-zb98t/hello-world-app" id=89b8cf8d-aa9a-4114-b5a6-3a1ef67776ce name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.965867765Z" level=info msg="Starting container: e3c46cc507aa8eb9585806cff2a5a7f3a6eb054984edb8632e7dcb0c245218fe" id=04a04cbf-7d07-4a2c-aa2b-9d36226b1ecb name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:14:57 addons-241520 crio[829]: time="2025-12-10T06:14:57.970331938Z" level=info msg="Started container" PID=7865 containerID=e3c46cc507aa8eb9585806cff2a5a7f3a6eb054984edb8632e7dcb0c245218fe description=default/hello-world-app-5d498dc89-zb98t/hello-world-app id=04a04cbf-7d07-4a2c-aa2b-9d36226b1ecb name=/runtime.v1.RuntimeService/StartContainer sandboxID=7343ed31203c3aefe169b35a02055dd1b7b028f2fb62fffb6658634935b9409b
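
The CRI-O excerpt above is one pass through the CRI container lifecycle: RunPodSandbox, ImageStatus/PullImage, CreateContainer, StartContainer. The sketch below issues the same call sequence against CRI-O's gRPC socket via k8s.io/cri-api; the socket path is CRI-O's default, and the names and minimal configs are assumptions for illustration.

// cri_pull_run.go - a sketch of the CRI call sequence visible in the CRI-O
// log above; socket path, names, and configs are assumptions.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func must(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	must(err)
	defer conn.Close()

	ctx := context.Background()
	rt := runtimev1.NewRuntimeServiceClient(conn)
	img := runtimev1.NewImageServiceClient(conn)

	// RunPodSandbox: the "Running pod sandbox ... /POD" line.
	sandboxCfg := &runtimev1.PodSandboxConfig{
		Metadata: &runtimev1.PodSandboxMetadata{
			Name: "hello-world-app", Namespace: "default", Uid: "demo-uid",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimev1.RunPodSandboxRequest{Config: sandboxCfg})
	must(err)

	// PullImage: the "Pulling image" / "Pulled image" lines.
	image := &runtimev1.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}
	_, err = img.PullImage(ctx, &runtimev1.PullImageRequest{Image: image})
	must(err)

	// CreateContainer + StartContainer: the final two log lines.
	ctr, err := rt.CreateContainer(ctx, &runtimev1.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimev1.ContainerConfig{
			Metadata: &runtimev1.ContainerMetadata{Name: "hello-world-app"},
			Image:    image,
		},
		SandboxConfig: sandboxCfg,
	})
	must(err)
	_, err = rt.StartContainer(ctx, &runtimev1.StartContainerRequest{ContainerId: ctr.ContainerId})
	must(err)
}
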
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	e3c46cc507aa8       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   7343ed31203c3       hello-world-app-5d498dc89-zb98t             default
	8c915e8f84920       public.ecr.aws/nginx/nginx@sha256:6224130b55f5d4f555846ebdedec6ce07822ebf205b9c1b77c2fd91abab6eb25                                           2 minutes ago            Running             nginx                                    0                   e024644fea584       nginx                                       default
	ed0a8cabffe49       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   1658f14833f78       busybox                                     default
	e3e105248b47d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   f4f06b7d19eff       csi-hostpathplugin-qf6mx                    kube-system
	15f67f2cc14b2       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   f4f06b7d19eff       csi-hostpathplugin-qf6mx                    kube-system
	706727cbc03fa       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   f4f06b7d19eff       csi-hostpathplugin-qf6mx                    kube-system
	32f373b06842f       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   f4f06b7d19eff       csi-hostpathplugin-qf6mx                    kube-system
	cc2087138549b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   f4f06b7d19eff       csi-hostpathplugin-qf6mx                    kube-system
	791a0461acaf4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   ac1f27e6371ab       gcp-auth-78565c9fb4-744nw                   gcp-auth
	62130e3244ed5       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   da52382fd2f26       ingress-nginx-controller-85d4c799dd-kqczr   ingress-nginx
	e832e618b9556       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   494ccea905237       gadget-2srh4                                gadget
	631ab57806ae3       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   e4bb112c099eb       local-path-provisioner-648f6765c9-kvsb5     local-path-storage
	da7b3d50307f0       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   3365eb19c69d7       yakd-dashboard-5ff678cb9-v86q9              yakd-dashboard
	3edcc847365a8       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   ed5914fb64905       kube-ingress-dns-minikube                   kube-system
	e9f72b624d9a0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   374978dee187c       registry-proxy-pfbv5                        kube-system
	b3d13279bb1f9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   f4f06b7d19eff       csi-hostpathplugin-qf6mx                    kube-system
	d5d967fa674ce       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             3 minutes ago            Exited              patch                                    2                   69e58268f189f       ingress-nginx-admission-patch-pvxz6         ingress-nginx
	b11ba380657e9       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   6a7da179fadeb       nvidia-device-plugin-daemonset-qbztj        kube-system
	7c4c997d687b5       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   a4aa3a7fa29d7       snapshot-controller-7d9fbc56b8-qgnvq        kube-system
	7cf2b1b068ab5       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   f711387c879a4       registry-6b586f9694-jv6bp                   kube-system
	ec50b512c8e8c       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   1a0ae0a08be91       csi-hostpath-resizer-0                      kube-system
	3e78fb6d659d3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              create                                   0                   fc125abb512bd       ingress-nginx-admission-create-hzj5c        ingress-nginx
	586ddad1ca64c       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   c4c48253d25db       cloud-spanner-emulator-5bdddb765-fb462      default
	fcb9b12f636ff       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   e67386f467ee0       metrics-server-85b7d694d7-rwcgk             kube-system
	5bf1539b0ce43       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   44077c2eb672f       snapshot-controller-7d9fbc56b8-8pzhn        kube-system
	ccf62bd56b5d1       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   70166cf12cba7       csi-hostpath-attacher-0                     kube-system
	c310e24a2efb9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   294cad811cc34       coredns-66bc5c9577-ds7m5                    kube-system
	75a53210a6a83       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                                                             4 minutes ago            Running             storage-provisioner                      0                   f099c08caac5c       storage-provisioner                         kube-system
	c0e0a1b2a34ab       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1                                           4 minutes ago            Running             kindnet-cni                              0                   b9d924906a5e3       kindnet-h9tr4                               kube-system
	ba463a83af075       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                                                             4 minutes ago            Running             kube-proxy                               0                   be28fb6b5d96a       kube-proxy-srgdx                            kube-system
	a8f3303a4f28e       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                                                             4 minutes ago            Running             kube-scheduler                           0                   311c46c40e39c       kube-scheduler-addons-241520                kube-system
	a33aa3e9cb946       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                                                             4 minutes ago            Running             kube-apiserver                           0                   9868d4b159432       kube-apiserver-addons-241520                kube-system
	ebaecf86934b5       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                                                             4 minutes ago            Running             kube-controller-manager                  0                   cf61a41864df6       kube-controller-manager-addons-241520       kube-system
	88969ee781c52       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             4 minutes ago            Running             etcd                                     0                   a058142f24f09       etcd-addons-241520                          kube-system
	
	
	==> coredns [c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749] <==
	[INFO] 10.244.0.12:40772 - 53967 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.005175696s
	[INFO] 10.244.0.12:40772 - 35629 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000195556s
	[INFO] 10.244.0.12:40772 - 48678 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000128117s
	[INFO] 10.244.0.12:47976 - 56897 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000236156s
	[INFO] 10.244.0.12:47976 - 56660 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112322s
	[INFO] 10.244.0.12:34355 - 10905 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000102115s
	[INFO] 10.244.0.12:34355 - 11151 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00016024s
	[INFO] 10.244.0.12:51487 - 32619 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000118615s
	[INFO] 10.244.0.12:51487 - 32414 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000173558s
	[INFO] 10.244.0.12:52484 - 48828 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001479607s
	[INFO] 10.244.0.12:52484 - 48384 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001462991s
	[INFO] 10.244.0.12:33946 - 16628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000123678s
	[INFO] 10.244.0.12:33946 - 16216 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157147s
	[INFO] 10.244.0.21:38503 - 62804 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000174296s
	[INFO] 10.244.0.21:53061 - 64971 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000514954s
	[INFO] 10.244.0.21:48327 - 59433 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160463s
	[INFO] 10.244.0.21:35758 - 16802 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000231389s
	[INFO] 10.244.0.21:56658 - 17051 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124465s
	[INFO] 10.244.0.21:47054 - 40729 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000154784s
	[INFO] 10.244.0.21:43843 - 42881 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003955152s
	[INFO] 10.244.0.21:40670 - 1548 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003206389s
	[INFO] 10.244.0.21:47293 - 52086 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001568986s
	[INFO] 10.244.0.21:43737 - 49819 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002448452s
	[INFO] 10.244.0.23:39934 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000230798s
	[INFO] 10.244.0.23:54074 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000133885s
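Note: the NXDOMAIN/NOERROR pairs above are normal cluster-DNS behavior rather than lookup failures. With the default ndots:5 in a pod's resolv.conf, a name like registry.kube-system.svc.cluster.local is first tried against every search domain (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal) before the absolute query answers NOERROR. The search path can be confirmed from any pod, e.g. the busybox pod this suite deploys:

	kubectl --context addons-241520 exec busybox -- cat /etc/resolv.conf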
	
	
	==> describe nodes <==
	Name:               addons-241520
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-241520
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=addons-241520
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_10_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-241520
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-241520"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:10:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-241520
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:14:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:14:36 +0000   Wed, 10 Dec 2025 06:10:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:14:36 +0000   Wed, 10 Dec 2025 06:10:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:14:36 +0000   Wed, 10 Dec 2025 06:10:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:14:36 +0000   Wed, 10 Dec 2025 06:10:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-241520
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                6d39ab4b-4b9e-4f06-8c01-e4cbe723bf1a
	  Boot ID:                    7e517eb4-cdae-4e97-a158-8132b5e595bf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     cloud-spanner-emulator-5bdddb765-fb462       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  default                     hello-world-app-5d498dc89-zb98t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-2srh4                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  gcp-auth                    gcp-auth-78565c9fb4-744nw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-kqczr    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m13s
	  kube-system                 coredns-66bc5c9577-ds7m5                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m21s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 csi-hostpathplugin-qf6mx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 etcd-addons-241520                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m27s
	  kube-system                 kindnet-h9tr4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m21s
	  kube-system                 kube-apiserver-addons-241520                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-controller-manager-addons-241520        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-srgdx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-scheduler-addons-241520                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 metrics-server-85b7d694d7-rwcgk              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m14s
	  kube-system                 nvidia-device-plugin-daemonset-qbztj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 registry-6b586f9694-jv6bp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 registry-creds-764b6fb674-xsqwd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 registry-proxy-pfbv5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 snapshot-controller-7d9fbc56b8-8pzhn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 snapshot-controller-7d9fbc56b8-qgnvq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  local-path-storage          local-path-provisioner-648f6765c9-kvsb5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-v86q9               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m20s                  kube-proxy       
	  Normal   Starting                 4m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m34s (x8 over 4m34s)  kubelet          Node addons-241520 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m34s (x8 over 4m34s)  kubelet          Node addons-241520 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m34s (x8 over 4m34s)  kubelet          Node addons-241520 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m27s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m27s                  kubelet          Node addons-241520 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m27s                  kubelet          Node addons-241520 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m27s                  kubelet          Node addons-241520 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m22s                  node-controller  Node addons-241520 event: Registered Node addons-241520 in Controller
	  Normal   NodeReady                4m5s                   kubelet          Node addons-241520 status is now: NodeReady
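Note: the two "Starting kubelet." entries (4m35s and 4m27s) and the duplicated NodeHasSufficient* events are consistent with the kubelet being restarted once while kubeadm finalizes its configuration during minikube provisioning; they do not indicate node flapping. The node's event stream can be re-listed in time order with:

	kubectl --context addons-241520 get events --field-selector involvedObject.name=addons-241520 --sort-by=.lastTimestamp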
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f] <==
	{"level":"warn","ts":"2025-12-10T06:10:26.912675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:26.934694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:26.993601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.031727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.059084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.093715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.116616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.149948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.168659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.192011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.255274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.265985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.310462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.320470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.359137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.394001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.415382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.440035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.624900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:46.601060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:46.624556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:57.696811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:57.720685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:57.749093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:57.758100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44356","server-name":"","error":"EOF"}
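Note: the repeated "rejected connection ... error: EOF" warnings from 127.0.0.1 are typically benign: they are usually plain-TCP probes (readiness checks or bootstrap tooling testing the client port) that connect and close without completing a TLS handshake. etcd health can be verified directly; the cert paths below are minikube's kubeadm defaults and may differ:

	kubectl --context addons-241520 -n kube-system exec etcd-addons-241520 -- etcdctl \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health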
	
	
	==> gcp-auth [791a0461acaf48e79d524ccda615028355ee7e8a80133011ecbf61a56f7b35c8] <==
	2025/12/10 06:11:49 GCP Auth Webhook started!
	2025/12/10 06:11:58 Ready to marshal response ...
	2025/12/10 06:11:58 Ready to write response ...
	2025/12/10 06:11:58 Ready to marshal response ...
	2025/12/10 06:11:58 Ready to write response ...
	2025/12/10 06:11:58 Ready to marshal response ...
	2025/12/10 06:11:58 Ready to write response ...
	2025/12/10 06:12:19 Ready to marshal response ...
	2025/12/10 06:12:19 Ready to write response ...
	2025/12/10 06:12:24 Ready to marshal response ...
	2025/12/10 06:12:24 Ready to write response ...
	2025/12/10 06:12:24 Ready to marshal response ...
	2025/12/10 06:12:24 Ready to write response ...
	2025/12/10 06:12:33 Ready to marshal response ...
	2025/12/10 06:12:33 Ready to write response ...
	2025/12/10 06:12:35 Ready to marshal response ...
	2025/12/10 06:12:35 Ready to write response ...
	2025/12/10 06:12:50 Ready to marshal response ...
	2025/12/10 06:12:50 Ready to write response ...
	2025/12/10 06:13:04 Ready to marshal response ...
	2025/12/10 06:13:04 Ready to write response ...
	2025/12/10 06:14:56 Ready to marshal response ...
	2025/12/10 06:14:56 Ready to write response ...
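Note: each "Ready to marshal response ... Ready to write response" pair corresponds to one admission request served by the webhook, and the timestamps line up with pod creations elsewhere in this report (busybox at 06:11:58, the test workloads afterwards, hello-world-app at 06:14:56). Assuming the Deployment is named gcp-auth (inferred from the pod name above), the same log can be tailed with:

	kubectl --context addons-241520 -n gcp-auth logs deploy/gcp-auth --tail=20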
	
	
	==> kernel <==
	 06:14:59 up  2:57,  0 user,  load average: 0.42, 1.61, 1.62
	Linux addons-241520 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7] <==
	I1210 06:12:53.447045       1 main.go:301] handling current node
	I1210 06:13:03.447442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:13:03.447560       1 main.go:301] handling current node
	I1210 06:13:13.446858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:13:13.446890       1 main.go:301] handling current node
	I1210 06:13:23.449337       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:13:23.449372       1 main.go:301] handling current node
	I1210 06:13:33.456357       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:13:33.456394       1 main.go:301] handling current node
	I1210 06:13:43.446978       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:13:43.447087       1 main.go:301] handling current node
	I1210 06:13:53.450411       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:13:53.450447       1 main.go:301] handling current node
	I1210 06:14:03.455984       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:14:03.456019       1 main.go:301] handling current node
	I1210 06:14:13.447084       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:14:13.447121       1 main.go:301] handling current node
	I1210 06:14:23.449579       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:14:23.449689       1 main.go:301] handling current node
	I1210 06:14:33.453412       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:14:33.453450       1 main.go:301] handling current node
	I1210 06:14:43.446843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:14:43.446908       1 main.go:301] handling current node
	I1210 06:14:53.452590       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:14:53.452625       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554] <==
	E1210 06:10:53.956415       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.152.36:443: connect: connection refused" logger="UnhandledError"
	W1210 06:10:53.956980       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.152.36:443: connect: connection refused
	E1210 06:10:53.957015       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.152.36:443: connect: connection refused" logger="UnhandledError"
	W1210 06:10:54.085228       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.152.36:443: connect: connection refused
	E1210 06:10:54.085270       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.152.36:443: connect: connection refused" logger="UnhandledError"
	W1210 06:10:57.694308       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 06:10:57.713610       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 06:10:57.742283       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1210 06:10:57.757833       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1210 06:11:04.380545       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.194.81:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.194.81:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.194.81:443: connect: connection refused" logger="UnhandledError"
	W1210 06:11:04.380890       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 06:11:04.380961       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 06:11:04.383579       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.194.81:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.194.81:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.194.81:443: connect: connection refused" logger="UnhandledError"
	E1210 06:11:04.390496       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.194.81:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.194.81:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.194.81:443: connect: connection refused" logger="UnhandledError"
	I1210 06:11:04.535045       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 06:12:09.211961       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46048: use of closed network connection
	E1210 06:12:09.348795       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46068: use of closed network connection
	I1210 06:12:35.523156       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1210 06:12:35.859080       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.150.24"}
	I1210 06:12:57.810092       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1210 06:12:59.439904       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1210 06:14:57.061476       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.4.238"}
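Note: the "Failed calling webhook, failing open gcp-auth-mutate.k8s.io" messages at 06:10:53-54 record pods that were admitted without mutation while the webhook backend was still starting (its Service had no ready endpoints yet, hence connection refused); as the log states, the webhook fails open, so this is transient. Likewise the v1beta1.metrics.k8s.io errors end once metrics-server registers at 06:11:04. The aggregated API's current state can be checked with:

	kubectl --context addons-241520 get apiservice v1beta1.metrics.k8s.io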
	
	
	==> kube-controller-manager [ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272] <==
	I1210 06:10:36.503150       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:10:36.503162       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 06:10:36.503292       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 06:10:36.503371       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-241520"
	I1210 06:10:36.503417       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 06:10:36.504611       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:10:36.511995       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 06:10:36.512120       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:10:36.529055       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 06:10:36.529244       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 06:10:36.529717       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 06:10:36.530897       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 06:10:36.530939       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 06:10:36.531013       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:10:36.531059       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 06:10:36.531537       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 06:10:36.537878       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 06:10:36.540306       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 06:10:36.547675       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 06:10:56.506320       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1210 06:11:06.489331       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1210 06:11:06.489383       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 06:11:06.521862       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 06:11:06.590330       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:11:06.622323       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f] <==
	I1210 06:10:38.602643       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:10:38.723132       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:10:38.823486       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:10:38.823521       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1210 06:10:38.823587       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:10:38.882875       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:10:38.882923       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:10:38.892975       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:10:38.893523       1 server.go:527] "Version info" version="v1.34.3"
	I1210 06:10:38.893545       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:10:38.895413       1 config.go:200] "Starting service config controller"
	I1210 06:10:38.895442       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:10:38.895461       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:10:38.895465       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:10:38.895493       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:10:38.895497       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:10:38.896141       1 config.go:309] "Starting node config controller"
	I1210 06:10:38.896159       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:10:38.896165       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:10:38.995602       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:10:38.995643       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:10:38.995686       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
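Note: the single error above is a configuration hint, not a failure: with nodePortAddresses unset, kube-proxy accepts NodePort connections on every local IP. The log itself names the remedy, which for a flag-configured kube-proxy corresponds to:

	kube-proxy --nodeport-addresses=primary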
	
	
	==> kube-scheduler [a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af] <==
	E1210 06:10:28.862939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1210 06:10:28.863077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:10:28.863127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:10:28.863171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:10:28.869050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:10:28.869316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:10:28.869375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:10:28.869432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:10:28.869486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 06:10:28.869587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:10:28.869627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 06:10:28.872344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:10:28.872414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:10:28.872464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 06:10:28.872533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 06:10:28.872616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:10:28.872738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:10:28.872792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:10:28.872891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:10:29.686570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:10:29.793465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:10:29.804217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:10:29.957405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:10:30.456965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1210 06:10:32.956813       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
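Note: the burst of "Failed to watch ... is forbidden" errors between 06:10:28 and 06:10:30 is the usual startup race: the scheduler comes up before the API server finishes bootstrapping its RBAC, and the errors stop once "Caches are synced" is logged at 06:10:32. The binding that eventually authorizes these lists can be inspected with:

	kubectl --context addons-241520 get clusterrolebinding system:kube-scheduler -o wide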
	
	
	==> kubelet <==
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.434122    1999 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq52l\" (UniqueName: \"kubernetes.io/projected/a854d747-e67a-4120-9e78-585dad838531-kube-api-access-cq52l\") pod \"a854d747-e67a-4120-9e78-585dad838531\" (UID: \"a854d747-e67a-4120-9e78-585dad838531\") "
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.434181    1999 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a854d747-e67a-4120-9e78-585dad838531-gcp-creds\") pod \"a854d747-e67a-4120-9e78-585dad838531\" (UID: \"a854d747-e67a-4120-9e78-585dad838531\") "
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.434339    1999 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4944d495-d58f-11f0-9443-7ef19f31b0ce\") pod \"a854d747-e67a-4120-9e78-585dad838531\" (UID: \"a854d747-e67a-4120-9e78-585dad838531\") "
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.434748    1999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a854d747-e67a-4120-9e78-585dad838531-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "a854d747-e67a-4120-9e78-585dad838531" (UID: "a854d747-e67a-4120-9e78-585dad838531"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.441782    1999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^4944d495-d58f-11f0-9443-7ef19f31b0ce" (OuterVolumeSpecName: "task-pv-storage") pod "a854d747-e67a-4120-9e78-585dad838531" (UID: "a854d747-e67a-4120-9e78-585dad838531"). InnerVolumeSpecName "pvc-0afb2a18-afc0-40b7-a621-206cdd20b5de". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.441942    1999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a854d747-e67a-4120-9e78-585dad838531-kube-api-access-cq52l" (OuterVolumeSpecName: "kube-api-access-cq52l") pod "a854d747-e67a-4120-9e78-585dad838531" (UID: "a854d747-e67a-4120-9e78-585dad838531"). InnerVolumeSpecName "kube-api-access-cq52l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.466496    1999 scope.go:117] "RemoveContainer" containerID="cb5a77b009dd861607f09b229048b89362a7e38a0586521ddc077078f3cf499f"
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.486020    1999 scope.go:117] "RemoveContainer" containerID="cb5a77b009dd861607f09b229048b89362a7e38a0586521ddc077078f3cf499f"
	Dec 10 06:13:11 addons-241520 kubelet[1999]: E1210 06:13:11.486672    1999 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb5a77b009dd861607f09b229048b89362a7e38a0586521ddc077078f3cf499f\": container with ID starting with cb5a77b009dd861607f09b229048b89362a7e38a0586521ddc077078f3cf499f not found: ID does not exist" containerID="cb5a77b009dd861607f09b229048b89362a7e38a0586521ddc077078f3cf499f"
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.486715    1999 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb5a77b009dd861607f09b229048b89362a7e38a0586521ddc077078f3cf499f"} err="failed to get container status \"cb5a77b009dd861607f09b229048b89362a7e38a0586521ddc077078f3cf499f\": rpc error: code = NotFound desc = could not find container \"cb5a77b009dd861607f09b229048b89362a7e38a0586521ddc077078f3cf499f\": container with ID starting with cb5a77b009dd861607f09b229048b89362a7e38a0586521ddc077078f3cf499f not found: ID does not exist"
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.535048    1999 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cq52l\" (UniqueName: \"kubernetes.io/projected/a854d747-e67a-4120-9e78-585dad838531-kube-api-access-cq52l\") on node \"addons-241520\" DevicePath \"\""
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.535248    1999 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a854d747-e67a-4120-9e78-585dad838531-gcp-creds\") on node \"addons-241520\" DevicePath \"\""
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.535333    1999 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0afb2a18-afc0-40b7-a621-206cdd20b5de\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4944d495-d58f-11f0-9443-7ef19f31b0ce\") on node \"addons-241520\" "
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.545964    1999 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-0afb2a18-afc0-40b7-a621-206cdd20b5de" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^4944d495-d58f-11f0-9443-7ef19f31b0ce") on node "addons-241520"
	Dec 10 06:13:11 addons-241520 kubelet[1999]: I1210 06:13:11.635693    1999 reconciler_common.go:299] "Volume detached for volume \"pvc-0afb2a18-afc0-40b7-a621-206cdd20b5de\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^4944d495-d58f-11f0-9443-7ef19f31b0ce\") on node \"addons-241520\" DevicePath \"\""
	Dec 10 06:13:13 addons-241520 kubelet[1999]: I1210 06:13:13.342840    1999 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a854d747-e67a-4120-9e78-585dad838531" path="/var/lib/kubelet/pods/a854d747-e67a-4120-9e78-585dad838531/volumes"
	Dec 10 06:13:31 addons-241520 kubelet[1999]: I1210 06:13:31.453114    1999 scope.go:117] "RemoveContainer" containerID="f8f088deefa6d96e2caa8c16153f3ef98e044f40dbc45111a9246819e49ae829"
	Dec 10 06:13:31 addons-241520 kubelet[1999]: I1210 06:13:31.478895    1999 scope.go:117] "RemoveContainer" containerID="01c67e625a86bb68cd324e94d06efd83e84463e7e67ba1541db364be00b939f6"
	Dec 10 06:13:31 addons-241520 kubelet[1999]: E1210 06:13:31.497413    1999 manager.go:1116] Failed to create existing container: /crio/crio-f8f088deefa6d96e2caa8c16153f3ef98e044f40dbc45111a9246819e49ae829: Error finding container f8f088deefa6d96e2caa8c16153f3ef98e044f40dbc45111a9246819e49ae829: Status 404 returned error can't find the container with id f8f088deefa6d96e2caa8c16153f3ef98e044f40dbc45111a9246819e49ae829
	Dec 10 06:13:40 addons-241520 kubelet[1999]: I1210 06:13:40.339950    1999 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-qbztj" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 06:13:47 addons-241520 kubelet[1999]: I1210 06:13:47.341115    1999 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-jv6bp" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 06:14:01 addons-241520 kubelet[1999]: I1210 06:14:01.345531    1999 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-pfbv5" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 06:14:57 addons-241520 kubelet[1999]: I1210 06:14:57.000063    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf5dm\" (UniqueName: \"kubernetes.io/projected/70669ccd-53bb-4d4e-8c05-9c2273ade686-kube-api-access-gf5dm\") pod \"hello-world-app-5d498dc89-zb98t\" (UID: \"70669ccd-53bb-4d4e-8c05-9c2273ade686\") " pod="default/hello-world-app-5d498dc89-zb98t"
	Dec 10 06:14:57 addons-241520 kubelet[1999]: I1210 06:14:57.001349    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/70669ccd-53bb-4d4e-8c05-9c2273ade686-gcp-creds\") pod \"hello-world-app-5d498dc89-zb98t\" (UID: \"70669ccd-53bb-4d4e-8c05-9c2273ade686\") " pod="default/hello-world-app-5d498dc89-zb98t"
	Dec 10 06:14:57 addons-241520 kubelet[1999]: I1210 06:14:57.340203    1999 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-qbztj" secret="" err="secret \"gcp-auth\" not found"
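Note: the recurring "Unable to retrieve pull secret" messages mean these kube-system pods reference an imagePullSecret named gcp-auth that does not exist in their namespace; the gcp-auth addon appears not to create the secret in kube-system, so pulls fall back to anonymous access, which still succeeds for public images. Whether the secret exists can be checked with:

	kubectl --context addons-241520 -n kube-system get secret gcp-auth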
	
	
	==> storage-provisioner [75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e] <==
	W1210 06:14:34.070443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:36.073654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:36.080240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:38.083657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:38.088749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:40.091623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:40.099669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:42.106302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:42.112726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:44.115764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:44.120328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:46.123962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:46.130664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:48.133331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:48.140180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:50.143774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:50.148828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:52.152331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:52.159683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:54.163193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:54.167902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:56.171635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:56.176387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:58.181716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:14:58.188147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
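Note: the steady stream of deprecation warnings comes from the provisioner's leader-election loop, which still renews a v1 Endpoints object (a get/update pair roughly every two seconds, matching the paired timestamps above); the API server emits the warning on every such request, so this is cosmetic until the component moves to Leases or EndpointSlice. The election object sits alongside the other Endpoints in kube-system:

	kubectl --context addons-241520 -n kube-system get endpoints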
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-241520 -n addons-241520
helpers_test.go:270: (dbg) Run:  kubectl --context addons-241520 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-hzj5c ingress-nginx-admission-patch-pvxz6 registry-creds-764b6fb674-xsqwd
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-241520 describe pod ingress-nginx-admission-create-hzj5c ingress-nginx-admission-patch-pvxz6 registry-creds-764b6fb674-xsqwd
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-241520 describe pod ingress-nginx-admission-create-hzj5c ingress-nginx-admission-patch-pvxz6 registry-creds-764b6fb674-xsqwd: exit status 1 (322.253389ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hzj5c" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pvxz6" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-xsqwd" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-241520 describe pod ingress-nginx-admission-create-hzj5c ingress-nginx-admission-patch-pvxz6 registry-creds-764b6fb674-xsqwd: exit status 1
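Note: the exit status 1 above is an artifact of the post-mortem itself rather than a further failure: the two ingress-nginx admission jobs are one-shot and their pods, like the registry-creds pod, were evidently deleted between the non-running-pods listing and the describe call, so all three lookups return NotFound. Re-running the same listing would likely confirm they are gone:

	kubectl --context addons-241520 get po -A --field-selector=status.phase!=Running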
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (355.402169ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 06:15:00.791658  375629 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:00.792637  375629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:00.792707  375629 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:00.792731  375629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:00.793056  375629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:15:00.793498  375629 mustload.go:66] Loading cluster: addons-241520
	I1210 06:15:00.794006  375629 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:00.794053  375629 addons.go:622] checking whether the cluster is paused
	I1210 06:15:00.794216  375629 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:00.794289  375629 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:15:00.795149  375629 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:15:00.818888  375629 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:00.818975  375629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:15:00.840115  375629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:15:00.966395  375629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:15:00.966513  375629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:15:01.017529  375629 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:15:01.017552  375629 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:15:01.017558  375629 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:15:01.017563  375629 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:15:01.017578  375629 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:15:01.017586  375629 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:15:01.017590  375629 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:15:01.017594  375629 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:15:01.017600  375629 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:15:01.017618  375629 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:15:01.017628  375629 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:15:01.017632  375629 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:15:01.017635  375629 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:15:01.017639  375629 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:15:01.017646  375629 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:15:01.017656  375629 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:15:01.017659  375629 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:15:01.017666  375629 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:15:01.017669  375629 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:15:01.017673  375629 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:15:01.017677  375629 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:15:01.017681  375629 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:15:01.017684  375629 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:15:01.017687  375629 cri.go:89] found id: ""
	I1210 06:15:01.017766  375629 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:15:01.044887  375629 out.go:203] 
	W1210 06:15:01.047897  375629 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:15:01.047928  375629 out.go:285] * 
	* 
	W1210 06:15:01.053345  375629 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:15:01.056303  375629 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable ingress --alsologtostderr -v=1: exit status 11 (302.503034ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 06:15:01.132726  375682 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:01.133486  375682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:01.133505  375682 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:01.133513  375682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:01.133843  375682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:15:01.134228  375682 mustload.go:66] Loading cluster: addons-241520
	I1210 06:15:01.134709  375682 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:01.134735  375682 addons.go:622] checking whether the cluster is paused
	I1210 06:15:01.134913  375682 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:15:01.134931  375682 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:15:01.135540  375682 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:15:01.155861  375682 ssh_runner.go:195] Run: systemctl --version
	I1210 06:15:01.155932  375682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:15:01.176583  375682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:15:01.289018  375682 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:15:01.289158  375682 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:15:01.331204  375682 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:15:01.331227  375682 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:15:01.331232  375682 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:15:01.331240  375682 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:15:01.331244  375682 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:15:01.331248  375682 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:15:01.331251  375682 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:15:01.331257  375682 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:15:01.331260  375682 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:15:01.331266  375682 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:15:01.331274  375682 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:15:01.331282  375682 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:15:01.331285  375682 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:15:01.331289  375682 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:15:01.331292  375682 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:15:01.331297  375682 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:15:01.331303  375682 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:15:01.331307  375682 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:15:01.331310  375682 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:15:01.331313  375682 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:15:01.331318  375682 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:15:01.331321  375682 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:15:01.331324  375682 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:15:01.331327  375682 cri.go:89] found id: ""
	I1210 06:15:01.331384  375682 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:15:01.350505  375682 out.go:203] 
	W1210 06:15:01.353423  375682 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:15:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:15:01.353449  375682 out.go:285] * 
	* 
	W1210 06:15:01.358527  375682 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:15:01.361589  375682 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.16s)
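Triage note: the addon enable/disable failures in these TestAddons sections all exit with MK_ADDON_DISABLE_PAUSED for the same reason. Before changing an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` inside the node, and on this crio node image that probe fails with "open /run/runc: no such file or directory", so the command aborts even though crictl can still list the kube-system containers. A minimal sketch for reproducing the probe by hand, assuming the minikube binary is on PATH (the profile name and the crictl flags are taken verbatim from the log above):

    # The exact probe minikube runs; fails the same way on this image:
    minikube -p addons-241520 ssh -- sudo runc list -f json

    # Confirm the runc state directory is absent:
    minikube -p addons-241520 ssh -- ls -ld /run/runc

    # crio's own view still shows the kube-system containers, so the cluster
    # is not actually paused; only the runc-based check fails:
    minikube -p addons-241520 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system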

TestAddons/parallel/InspektorGadget (6.28s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-2srh4" [e67b614b-7b1d-43e6-ae4b-0cb5a95170f9] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003436465s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (270.77558ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 06:13:18.523825  374533 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:13:18.524762  374533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:13:18.524777  374533 out.go:374] Setting ErrFile to fd 2...
	I1210 06:13:18.524782  374533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:13:18.525073  374533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:13:18.525429  374533 mustload.go:66] Loading cluster: addons-241520
	I1210 06:13:18.525826  374533 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:13:18.525847  374533 addons.go:622] checking whether the cluster is paused
	I1210 06:13:18.525954  374533 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:13:18.525969  374533 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:13:18.526504  374533 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:13:18.545014  374533 ssh_runner.go:195] Run: systemctl --version
	I1210 06:13:18.545081  374533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:13:18.562985  374533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:13:18.672222  374533 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:13:18.672322  374533 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:13:18.707337  374533 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:13:18.707361  374533 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:13:18.707367  374533 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:13:18.707376  374533 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:13:18.707380  374533 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:13:18.707384  374533 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:13:18.707387  374533 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:13:18.707390  374533 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:13:18.707393  374533 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:13:18.707399  374533 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:13:18.707403  374533 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:13:18.707406  374533 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:13:18.707409  374533 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:13:18.707413  374533 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:13:18.707422  374533 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:13:18.707427  374533 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:13:18.707430  374533 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:13:18.707435  374533 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:13:18.707438  374533 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:13:18.707441  374533 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:13:18.707446  374533 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:13:18.707450  374533 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:13:18.707453  374533 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:13:18.707456  374533 cri.go:89] found id: ""
	I1210 06:13:18.707523  374533 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:13:18.722964  374533 out.go:203] 
	W1210 06:13:18.726046  374533 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:13:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:13:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:13:18.726073  374533 out.go:285] * 
	* 
	W1210 06:13:18.731152  374533 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:13:18.733886  374533 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.28s)

TestAddons/parallel/MetricsServer (5.38s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.146366ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003852144s
addons_test.go:465: (dbg) Run:  kubectl --context addons-241520 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (272.219364ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 06:12:34.983945  373533 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:12:34.984698  373533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:34.984715  373533 out.go:374] Setting ErrFile to fd 2...
	I1210 06:12:34.984721  373533 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:34.985298  373533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:12:34.986485  373533 mustload.go:66] Loading cluster: addons-241520
	I1210 06:12:34.987161  373533 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:34.987197  373533 addons.go:622] checking whether the cluster is paused
	I1210 06:12:34.987368  373533 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:34.987381  373533 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:12:34.987954  373533 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:12:35.012476  373533 ssh_runner.go:195] Run: systemctl --version
	I1210 06:12:35.012548  373533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:12:35.032578  373533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:12:35.140357  373533 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:12:35.140469  373533 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:12:35.172914  373533 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:12:35.172944  373533 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:12:35.172954  373533 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:12:35.172964  373533 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:12:35.172968  373533 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:12:35.172972  373533 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:12:35.172975  373533 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:12:35.172978  373533 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:12:35.172981  373533 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:12:35.172987  373533 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:12:35.172990  373533 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:12:35.172993  373533 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:12:35.172996  373533 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:12:35.173010  373533 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:12:35.173013  373533 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:12:35.173018  373533 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:12:35.173021  373533 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:12:35.173025  373533 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:12:35.173036  373533 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:12:35.173046  373533 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:12:35.173051  373533 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:12:35.173054  373533 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:12:35.173057  373533 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:12:35.173060  373533 cri.go:89] found id: ""
	I1210 06:12:35.173122  373533 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:12:35.189615  373533 out.go:203] 
	W1210 06:12:35.192821  373533 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:12:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:12:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:12:35.192856  373533 out.go:285] * 
	* 
	W1210 06:12:35.198381  373533 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:12:35.201469  373533 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.38s)

TestAddons/parallel/CSI (38.18s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1210 06:12:34.290180  364265 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1210 06:12:34.296501  364265 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 06:12:34.296529  364265 kapi.go:107] duration metric: took 6.36323ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.373979ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-241520 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-241520 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [6c54cee3-c394-47f3-863b-b04b018b1b3a] Pending
helpers_test.go:353: "task-pv-pod" [6c54cee3-c394-47f3-863b-b04b018b1b3a] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003595285s
addons_test.go:574: (dbg) Run:  kubectl --context addons-241520 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-241520 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-241520 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-241520 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-241520 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-241520 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-241520 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [a854d747-e67a-4120-9e78-585dad838531] Pending
helpers_test.go:353: "task-pv-pod-restore" [a854d747-e67a-4120-9e78-585dad838531] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [a854d747-e67a-4120-9e78-585dad838531] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003364964s
addons_test.go:616: (dbg) Run:  kubectl --context addons-241520 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-241520 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-241520 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (300.864449ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 06:13:11.944731  374423 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:13:11.945654  374423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:13:11.945677  374423 out.go:374] Setting ErrFile to fd 2...
	I1210 06:13:11.945683  374423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:13:11.946034  374423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:13:11.946389  374423 mustload.go:66] Loading cluster: addons-241520
	I1210 06:13:11.946826  374423 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:13:11.946848  374423 addons.go:622] checking whether the cluster is paused
	I1210 06:13:11.946998  374423 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:13:11.947019  374423 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:13:11.947612  374423 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:13:11.968717  374423 ssh_runner.go:195] Run: systemctl --version
	I1210 06:13:11.968779  374423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:13:11.987413  374423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:13:12.104637  374423 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:13:12.104767  374423 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:13:12.163022  374423 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:13:12.163097  374423 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:13:12.163116  374423 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:13:12.163137  374423 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:13:12.163159  374423 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:13:12.163193  374423 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:13:12.163213  374423 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:13:12.163233  374423 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:13:12.163259  374423 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:13:12.163282  374423 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:13:12.163301  374423 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:13:12.163319  374423 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:13:12.163346  374423 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:13:12.163371  374423 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:13:12.163391  374423 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:13:12.163422  374423 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:13:12.163450  374423 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:13:12.163478  374423 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:13:12.163505  374423 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:13:12.163522  374423 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:13:12.163544  374423 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:13:12.163563  374423 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:13:12.163592  374423 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:13:12.163611  374423 cri.go:89] found id: ""
	I1210 06:13:12.163690  374423 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:13:12.182787  374423 out.go:203] 
	W1210 06:13:12.185724  374423 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:13:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:13:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:13:12.185759  374423 out.go:285] * 
	* 
	W1210 06:13:12.191048  374423 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:13:12.193942  374423 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (264.082567ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 06:13:12.249938  374474 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:13:12.250727  374474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:13:12.250754  374474 out.go:374] Setting ErrFile to fd 2...
	I1210 06:13:12.250762  374474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:13:12.251179  374474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:13:12.251594  374474 mustload.go:66] Loading cluster: addons-241520
	I1210 06:13:12.252397  374474 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:13:12.252418  374474 addons.go:622] checking whether the cluster is paused
	I1210 06:13:12.252566  374474 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:13:12.252628  374474 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:13:12.253824  374474 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:13:12.273341  374474 ssh_runner.go:195] Run: systemctl --version
	I1210 06:13:12.273405  374474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:13:12.290477  374474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:13:12.400046  374474 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:13:12.400136  374474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:13:12.432445  374474 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:13:12.432471  374474 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:13:12.432477  374474 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:13:12.432481  374474 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:13:12.432488  374474 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:13:12.432493  374474 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:13:12.432496  374474 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:13:12.432500  374474 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:13:12.432503  374474 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:13:12.432509  374474 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:13:12.432512  374474 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:13:12.432516  374474 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:13:12.432519  374474 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:13:12.432523  374474 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:13:12.432527  374474 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:13:12.432540  374474 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:13:12.432544  374474 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:13:12.432549  374474 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:13:12.432552  374474 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:13:12.432555  374474 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:13:12.432561  374474 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:13:12.432567  374474 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:13:12.432570  374474 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:13:12.432573  374474 cri.go:89] found id: ""
	I1210 06:13:12.432628  374474 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:13:12.447648  374474 out.go:203] 
	W1210 06:13:12.450523  374474 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:13:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:13:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:13:12.450551  374474 out.go:285] * 
	* 
	W1210 06:13:12.455569  374474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:13:12.458797  374474 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (38.18s)
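Triage note: the CSI provision/snapshot/restore flow itself passed (both task-pv-pod and task-pv-pod-restore became healthy); only the two trailing addon-disable calls failed, with the same MK_ADDON_DISABLE_PAUSED/runc error described above. For local triage, the storage flow can be replayed with the same manifests the test applies; a sketch assuming a minikube repo checkout, run from test/integration so the testdata paths resolve (all resource names below are taken from the transcript):

    kubectl --context addons-241520 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-241520 get pvc hpvc -n default -o jsonpath='{.status.phase}'   # poll until Bound
    kubectl --context addons-241520 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-241520 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-241520 get volumesnapshot new-snapshot-demo -n default -o jsonpath='{.status.readyToUse}'
    # restore the snapshot into a fresh claim and pod:
    kubectl --context addons-241520 delete pod task-pv-pod
    kubectl --context addons-241520 delete pvc hpvc
    kubectl --context addons-241520 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-241520 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml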

TestAddons/parallel/Headlamp (3.29s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-241520 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-241520 --alsologtostderr -v=1: exit status 11 (284.638689ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1210 06:12:09.682836  372350 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:12:09.684841  372350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:09.684908  372350 out.go:374] Setting ErrFile to fd 2...
	I1210 06:12:09.684934  372350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:09.685359  372350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:12:09.685750  372350 mustload.go:66] Loading cluster: addons-241520
	I1210 06:12:09.686245  372350 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:09.686287  372350 addons.go:622] checking whether the cluster is paused
	I1210 06:12:09.686438  372350 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:09.686471  372350 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:12:09.687016  372350 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:12:09.716101  372350 ssh_runner.go:195] Run: systemctl --version
	I1210 06:12:09.716157  372350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:12:09.735220  372350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:12:09.839841  372350 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:12:09.839945  372350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:12:09.872361  372350 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:12:09.872429  372350 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:12:09.872448  372350 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:12:09.872467  372350 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:12:09.872487  372350 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:12:09.872519  372350 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:12:09.872542  372350 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:12:09.872562  372350 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:12:09.872581  372350 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:12:09.872621  372350 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:12:09.872643  372350 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:12:09.872663  372350 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:12:09.872682  372350 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:12:09.872702  372350 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:12:09.872728  372350 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:12:09.872761  372350 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:12:09.872794  372350 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:12:09.872816  372350 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:12:09.872849  372350 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:12:09.872866  372350 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:12:09.872886  372350 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:12:09.872905  372350 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:12:09.872934  372350 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:12:09.872959  372350 cri.go:89] found id: ""
	I1210 06:12:09.873054  372350 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:12:09.888262  372350 out.go:203] 
	W1210 06:12:09.891265  372350 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:12:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:12:09.891292  372350 out.go:285] * 
	W1210 06:12:09.896499  372350 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:12:09.899287  372350 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-241520 --alsologtostderr -v=1": exit status 11
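Note: this failure and the TestAddons/parallel/CSI failure above exit at the same check. Before enabling or disabling an addon, minikube verifies the cluster is not paused by listing containers; in the stderr above the crictl listing succeeds, but the follow-up runc listing fails because /run/runc does not exist on this crio node. A minimal by-hand reproduction, as a sketch (assuming shell access to the node, e.g. via "minikube ssh -p addons-241520"; both commands are copied verbatim from the stderr above):

	# Succeeds in the log: crictl lists the kube-system containers.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Fails in the log: runc's state directory /run/runc is missing, so this
	# exits 1 with: open /run/runc: no such file or directory
	sudo runc list -f json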
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-241520
helpers_test.go:244: (dbg) docker inspect addons-241520:

-- stdout --
	[
	    {
	        "Id": "7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9",
	        "Created": "2025-12-10T06:09:53.53362706Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 365685,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:09:53.601853467Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9/hosts",
	        "LogPath": "/var/lib/docker/containers/7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9/7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9-json.log",
	        "Name": "/addons-241520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-241520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-241520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7dbf6b06e352dbd6152202fc867d8ff6338e749c3d61ff59432146a48f5744c9",
	                "LowerDir": "/var/lib/docker/overlay2/967d8bfa3f0b3c27b0b58d3acbfb1200cca1e98ed4b61902757132649cc8a30f-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/967d8bfa3f0b3c27b0b58d3acbfb1200cca1e98ed4b61902757132649cc8a30f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/967d8bfa3f0b3c27b0b58d3acbfb1200cca1e98ed4b61902757132649cc8a30f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/967d8bfa3f0b3c27b0b58d3acbfb1200cca1e98ed4b61902757132649cc8a30f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-241520",
	                "Source": "/var/lib/docker/volumes/addons-241520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-241520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-241520",
	                "name.minikube.sigs.k8s.io": "addons-241520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1973f5240a275d3bf2704705407b46fa337b7c75daf0b14a721ed8ffbaa5367a",
	            "SandboxKey": "/var/run/docker/netns/1973f5240a27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-241520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:9a:2d:49:a0:26",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "845a265d0e7b5e3a9437720e96236d256b61ca93174566fc563d2fd856a8dc10",
	                    "EndpointID": "76a5c1d9b65a6e58e5f2a25ed27a92da9d44f890eaa210c669fdc5cd280fb488",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-241520",
	                        "7dbf6b06e352"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-241520 -n addons-241520
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-241520 logs -n 25: (1.431804269s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ start   │ -o=json --download-only -p download-only-789794 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-789794 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	│ delete  │ --all │ minikube │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p download-only-789794 │ download-only-789794 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ start   │ -o=json --download-only -p download-only-091542 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-091542 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	│ delete  │ --all │ minikube │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p download-only-091542 │ download-only-091542 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ start   │ -o=json --download-only -p download-only-433687 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-433687 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	│ delete  │ --all │ minikube │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p download-only-433687 │ download-only-433687 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p download-only-789794 │ download-only-789794 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p download-only-091542 │ download-only-091542 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p download-only-433687 │ download-only-433687 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ start   │ --download-only -p download-docker-800978 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-800978 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	│ delete  │ -p download-docker-800978 │ download-docker-800978 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ start   │ --download-only -p binary-mirror-172562 --alsologtostderr --binary-mirror http://127.0.0.1:37171 --driver=docker  --container-runtime=crio │ binary-mirror-172562 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	│ delete  │ -p binary-mirror-172562 │ binary-mirror-172562 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ addons  │ disable dashboard -p addons-241520 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	│ addons  │ enable dashboard -p addons-241520 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	│ start   │ -p addons-241520 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:11 UTC │
	│ addons  │ addons-241520 addons disable volcano --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:11 UTC │                     │
	│ addons  │ addons-241520 addons disable gcp-auth --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │                     │
	│ addons  │ enable headlamp -p addons-241520 --alsologtostderr -v=1 │ addons-241520 │ jenkins │ v1.37.0 │ 10 Dec 25 06:12 UTC │                     │
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:09:32
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:09:32.529123  365349 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:09:32.529326  365349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:09:32.529350  365349 out.go:374] Setting ErrFile to fd 2...
	I1210 06:09:32.529372  365349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:09:32.529770  365349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:09:32.530383  365349 out.go:368] Setting JSON to false
	I1210 06:09:32.531773  365349 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10325,"bootTime":1765336648,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:09:32.531855  365349 start.go:143] virtualization:  
	I1210 06:09:32.535070  365349 out.go:179] * [addons-241520] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:09:32.539042  365349 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:09:32.539183  365349 notify.go:221] Checking for updates...
	I1210 06:09:32.545122  365349 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:09:32.548050  365349 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:09:32.550947  365349 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:09:32.553883  365349 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:09:32.556785  365349 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:09:32.559912  365349 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:09:32.593384  365349 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:09:32.593553  365349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:09:32.654346  365349 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 06:09:32.643860997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:09:32.654460  365349 docker.go:319] overlay module found
	I1210 06:09:32.657509  365349 out.go:179] * Using the docker driver based on user configuration
	I1210 06:09:32.660314  365349 start.go:309] selected driver: docker
	I1210 06:09:32.660341  365349 start.go:927] validating driver "docker" against <nil>
	I1210 06:09:32.660355  365349 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:09:32.661114  365349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:09:32.717367  365349 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 06:09:32.707347217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:09:32.717535  365349 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:09:32.717752  365349 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:09:32.720671  365349 out.go:179] * Using Docker driver with root privileges
	I1210 06:09:32.723697  365349 cni.go:84] Creating CNI manager for ""
	I1210 06:09:32.723769  365349 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:09:32.723782  365349 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:09:32.723855  365349 start.go:353] cluster config:
	{Name:addons-241520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-241520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:09:32.726925  365349 out.go:179] * Starting "addons-241520" primary control-plane node in "addons-241520" cluster
	I1210 06:09:32.729874  365349 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:09:32.732804  365349 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:09:32.735675  365349 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:09:32.735777  365349 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:09:32.750125  365349 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 06:09:32.750266  365349 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 06:09:32.750294  365349 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1210 06:09:32.750305  365349 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1210 06:09:32.750313  365349 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1210 06:09:32.750323  365349 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	W1210 06:09:32.789648  365349 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 06:09:32.837274  365349 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1210 06:09:32.837680  365349 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/config.json ...
	I1210 06:09:32.837737  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/config.json: {Name:mk64eb852ee62fa3403e6dbb125af50407f65a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:09:32.838038  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:09:33.009068  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:09:33.175029  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:09:33.360055  365349 cache.go:107] acquiring lock: {Name:mk02212e897dba66869d457b3bbeea186c9977d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360151  365349 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360245  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 06:09:33.360263  365349 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3" took 226.293µs
	I1210 06:09:33.360365  365349 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 06:09:33.360286  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:09:33.360387  365349 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 414.654µs
	I1210 06:09:33.360400  365349 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:09:33.360306  365349 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360454  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:09:33.360467  365349 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 163.498µs
	I1210 06:09:33.360474  365349 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:09:33.360341  365349 cache.go:107] acquiring lock: {Name:mk528ea302435a8d73a952727ebcf4c5d4bd15a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360763  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 06:09:33.360779  365349 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 438.433µs
	I1210 06:09:33.360787  365349 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 06:09:33.360613  365349 cache.go:107] acquiring lock: {Name:mkcde84ea8e341b56c14a9da0ddd80f253a2bcdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360835  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 06:09:33.360848  365349 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3" took 246.232µs
	I1210 06:09:33.360855  365349 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 06:09:33.360642  365349 cache.go:107] acquiring lock: {Name:mkd358dfd00c757fa5e4489a81c6d55b1de8de5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360658  365349 cache.go:107] acquiring lock: {Name:mk1e8ea2965a60a26ea6e464eb610a6affff1a11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360935  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 06:09:33.360940  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 06:09:33.360943  365349 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3" took 286.045µs
	I1210 06:09:33.360950  365349 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 06:09:33.360325  365349 cache.go:107] acquiring lock: {Name:mk028ba2317f3b1c037987bf153e02fff8ae3e15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:33.360952  365349 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3" took 311.037µs
	I1210 06:09:33.360968  365349 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 06:09:33.360973  365349 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 06:09:33.360978  365349 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 654.527µs
	I1210 06:09:33.360984  365349 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 06:09:33.361011  365349 cache.go:87] Successfully saved all images to host disk.
	I1210 06:09:51.014177  365349 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1210 06:09:51.014224  365349 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:09:51.014280  365349 start.go:360] acquireMachinesLock for addons-241520: {Name:mke5e792482575a95955cce7f5f982a5b20edf07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:09:51.014420  365349 start.go:364] duration metric: took 113.684µs to acquireMachinesLock for "addons-241520"
	I1210 06:09:51.014462  365349 start.go:93] Provisioning new machine with config: &{Name:addons-241520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-241520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:09:51.014542  365349 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:09:51.018199  365349 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1210 06:09:51.018466  365349 start.go:159] libmachine.API.Create for "addons-241520" (driver="docker")
	I1210 06:09:51.018505  365349 client.go:173] LocalClient.Create starting
	I1210 06:09:51.018617  365349 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem
	I1210 06:09:51.211349  365349 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem
	I1210 06:09:51.538996  365349 cli_runner.go:164] Run: docker network inspect addons-241520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:09:51.554794  365349 cli_runner.go:211] docker network inspect addons-241520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:09:51.554875  365349 network_create.go:284] running [docker network inspect addons-241520] to gather additional debugging logs...
	I1210 06:09:51.554910  365349 cli_runner.go:164] Run: docker network inspect addons-241520
	W1210 06:09:51.570430  365349 cli_runner.go:211] docker network inspect addons-241520 returned with exit code 1
	I1210 06:09:51.570463  365349 network_create.go:287] error running [docker network inspect addons-241520]: docker network inspect addons-241520: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-241520 not found
	I1210 06:09:51.570477  365349 network_create.go:289] output of [docker network inspect addons-241520]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-241520 not found
	
	** /stderr **
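Note: the failed inspect above is the expected probe, not a fault: minikube treats exit status 1 ("network addons-241520 not found") as the signal to create the network. A minimal sketch of the probe-then-create flow, using the same flags as the "docker network create" call logged just below:

    # Probe for the profile network; "not found" (exit 1) triggers creation.
    if ! docker network inspect addons-241520 >/dev/null 2>&1; then
      docker network create --driver=bridge \
        --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true \
        --label=name.minikube.sigs.k8s.io=addons-241520 \
        addons-241520
    fi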
	I1210 06:09:51.570582  365349 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:09:51.586211  365349 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a21be0}
	I1210 06:09:51.586257  365349 network_create.go:124] attempt to create docker network addons-241520 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 06:09:51.586315  365349 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-241520 addons-241520
	I1210 06:09:51.646295  365349 network_create.go:108] docker network addons-241520 192.168.49.0/24 created
	I1210 06:09:51.646332  365349 kic.go:121] calculated static IP "192.168.49.2" for the "addons-241520" container
	I1210 06:09:51.646442  365349 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:09:51.661902  365349 cli_runner.go:164] Run: docker volume create addons-241520 --label name.minikube.sigs.k8s.io=addons-241520 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:09:51.679826  365349 oci.go:103] Successfully created a docker volume addons-241520
	I1210 06:09:51.679938  365349 cli_runner.go:164] Run: docker run --rm --name addons-241520-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-241520 --entrypoint /usr/bin/test -v addons-241520:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:09:53.461927  365349 cli_runner.go:217] Completed: docker run --rm --name addons-241520-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-241520 --entrypoint /usr/bin/test -v addons-241520:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.781948447s)
	I1210 06:09:53.461962  365349 oci.go:107] Successfully prepared a docker volume addons-241520
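Aside: the "preload sidecar" run above is a throwaway container whose only job is to populate the named volume. Mounting the empty volume at /var makes Docker copy the kicbase image's /var contents into it, and the entrypoint /usr/bin/test -d /var/lib confirms that copy happened. A condensed sketch of the same trick (image reference exactly as logged):

    docker volume create addons-241520
    # Mounting an empty named volume over a non-empty image path copies the
    # image contents into the volume; "test -d /var/lib" then verifies it.
    docker run --rm --entrypoint /usr/bin/test \
      -v addons-241520:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f \
      -d /var/lib && echo volume prepared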
	I1210 06:09:53.462014  365349 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 06:09:53.462151  365349 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 06:09:53.462259  365349 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:09:53.515716  365349 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-241520 --name addons-241520 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-241520 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-241520 --network addons-241520 --ip 192.168.49.2 --volume addons-241520:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
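Note: every node service port is published to an ephemeral host port bound to 127.0.0.1 (--publish=127.0.0.1::8443 and friends), so the concrete mapping has to be looked up after the fact; the repeated "docker container inspect -f ...Ports..." calls below do exactly that. The same lookup by hand:

    # Resolve the ephemeral host port mapped to the node's SSH port.
    docker port addons-241520 22/tcp    # prints e.g. 127.0.0.1:33144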
	I1210 06:09:53.827138  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Running}}
	I1210 06:09:53.849570  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:09:53.870491  365349 cli_runner.go:164] Run: docker exec addons-241520 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:09:53.924827  365349 oci.go:144] the created container "addons-241520" has a running status.
	I1210 06:09:53.924858  365349 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa...
	I1210 06:09:54.683129  365349 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:09:54.703197  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:09:54.720868  365349 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:09:54.720895  365349 kic_runner.go:114] Args: [docker exec --privileged addons-241520 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:09:54.762275  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:09:54.782380  365349 machine.go:94] provisionDockerMachine start ...
	I1210 06:09:54.782485  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:54.799652  365349 main.go:143] libmachine: Using SSH client type: native
	I1210 06:09:54.799992  365349 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1210 06:09:54.800009  365349 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:09:54.800704  365349 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 06:09:57.952976  365349 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-241520
	
	I1210 06:09:57.952998  365349 ubuntu.go:182] provisioning hostname "addons-241520"
	I1210 06:09:57.953064  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:57.969971  365349 main.go:143] libmachine: Using SSH client type: native
	I1210 06:09:57.970287  365349 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1210 06:09:57.970305  365349 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-241520 && echo "addons-241520" | sudo tee /etc/hostname
	I1210 06:09:58.130625  365349 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-241520
	
	I1210 06:09:58.130717  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:58.147988  365349 main.go:143] libmachine: Using SSH client type: native
	I1210 06:09:58.148312  365349 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1210 06:09:58.148334  365349 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-241520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-241520/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-241520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:09:58.300080  365349 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:09:58.300108  365349 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 06:09:58.300130  365349 ubuntu.go:190] setting up certificates
	I1210 06:09:58.300140  365349 provision.go:84] configureAuth start
	I1210 06:09:58.300202  365349 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-241520
	I1210 06:09:58.321227  365349 provision.go:143] copyHostCerts
	I1210 06:09:58.321311  365349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 06:09:58.321438  365349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 06:09:58.321505  365349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 06:09:58.321556  365349 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.addons-241520 san=[127.0.0.1 192.168.49.2 addons-241520 localhost minikube]
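The server cert is minted with SANs for every name the API server may be dialed by: 127.0.0.1, the static node IP, the profile name, localhost, and minikube. One way to verify the SAN list on the generated cert (host-side path as logged):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'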
	I1210 06:09:58.399449  365349 provision.go:177] copyRemoteCerts
	I1210 06:09:58.399513  365349 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:09:58.399558  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:58.419555  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:09:58.525001  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:09:58.542694  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 06:09:58.560204  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:09:58.578182  365349 provision.go:87] duration metric: took 278.027858ms to configureAuth
	I1210 06:09:58.578256  365349 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:09:58.578484  365349 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:09:58.578605  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:58.595532  365349 main.go:143] libmachine: Using SSH client type: native
	I1210 06:09:58.595854  365349 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1210 06:09:58.595877  365349 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:09:58.892851  365349 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
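Note: the drop-in written above makes CRI-O treat the service CIDR (10.96.0.0/12) as an insecure registry, which the in-cluster registry addon depends on. A quick hand check that the option landed and the daemon came back after the restart (assuming the profile name):

    minikube ssh -p addons-241520 -- cat /etc/sysconfig/crio.minikube
    minikube ssh -p addons-241520 -- sudo systemctl is-active crio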
	I1210 06:09:58.892932  365349 machine.go:97] duration metric: took 4.110524386s to provisionDockerMachine
	I1210 06:09:58.892949  365349 client.go:176] duration metric: took 7.874436356s to LocalClient.Create
	I1210 06:09:58.892966  365349 start.go:167] duration metric: took 7.874501875s to libmachine.API.Create "addons-241520"
	I1210 06:09:58.892974  365349 start.go:293] postStartSetup for "addons-241520" (driver="docker")
	I1210 06:09:58.892997  365349 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:09:58.893079  365349 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:09:58.893146  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:58.910813  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:09:59.017550  365349 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:09:59.020941  365349 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:09:59.020972  365349 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:09:59.020985  365349 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 06:09:59.021054  365349 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 06:09:59.021082  365349 start.go:296] duration metric: took 128.102921ms for postStartSetup
	I1210 06:09:59.021428  365349 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-241520
	I1210 06:09:59.038977  365349 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/config.json ...
	I1210 06:09:59.039268  365349 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:09:59.039331  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:59.057169  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:09:59.158157  365349 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:09:59.162898  365349 start.go:128] duration metric: took 8.148339639s to createHost
	I1210 06:09:59.162966  365349 start.go:83] releasing machines lock for "addons-241520", held for 8.148530396s
	I1210 06:09:59.163055  365349 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-241520
	I1210 06:09:59.179938  365349 ssh_runner.go:195] Run: cat /version.json
	I1210 06:09:59.179959  365349 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:09:59.179987  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:59.180019  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:09:59.199694  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:09:59.200295  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:09:59.385403  365349 ssh_runner.go:195] Run: systemctl --version
	I1210 06:09:59.391893  365349 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:09:59.426066  365349 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:09:59.430350  365349 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:09:59.430448  365349 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:09:59.462438  365349 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 06:09:59.462474  365349 start.go:496] detecting cgroup driver to use...
	I1210 06:09:59.462511  365349 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:09:59.462565  365349 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:09:59.479807  365349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:09:59.492248  365349 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:09:59.492330  365349 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:09:59.510336  365349 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:09:59.529552  365349 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:09:59.652062  365349 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:09:59.783482  365349 docker.go:234] disabling docker service ...
	I1210 06:09:59.783547  365349 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:09:59.804975  365349 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:09:59.818744  365349 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:09:59.934522  365349 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:10:00.061730  365349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:10:00.083075  365349 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:10:00.105272  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:10:00.389445  365349 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:10:00.389537  365349 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.409708  365349 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:10:00.409795  365349 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.431770  365349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.447001  365349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.466326  365349 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:10:00.476995  365349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.493819  365349 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:10:00.513448  365349 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
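Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these keys (a reconstruction from the logged commands; section placement assumes the stock kicbase layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]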
	I1210 06:10:00.531823  365349 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:10:00.541733  365349 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:10:00.573520  365349 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:00.703517  365349 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:10:00.884561  365349 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:10:00.884722  365349 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:10:00.889155  365349 start.go:564] Will wait 60s for crictl version
	I1210 06:10:00.889286  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:00.893134  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:10:00.918284  365349 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:10:00.918431  365349 ssh_runner.go:195] Run: crio --version
	I1210 06:10:00.949097  365349 ssh_runner.go:195] Run: crio --version
	I1210 06:10:00.984010  365349 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 06:10:00.986972  365349 cli_runner.go:164] Run: docker network inspect addons-241520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:10:01.005570  365349 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:10:01.009681  365349 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:10:01.019949  365349 kubeadm.go:884] updating cluster {Name:addons-241520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-241520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:10:01.020138  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:10:01.169844  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:10:01.332453  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:10:01.484763  365349 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 06:10:01.484844  365349 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:10:01.511833  365349 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1210 06:10:01.511861  365349 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 06:10:01.511906  365349 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:01.511931  365349 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:01.512119  365349 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:10:01.512141  365349 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:01.512208  365349 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:01.512231  365349 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:01.512300  365349 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:01.512119  365349 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:01.514436  365349 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:01.514918  365349 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:01.515098  365349 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:01.515250  365349 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:01.515398  365349 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:01.515541  365349 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:10:01.515683  365349 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:01.516018  365349 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:01.861481  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:01.870674  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:01.890174  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:01.906149  365349 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1210 06:10:01.906239  365349 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:01.906334  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:01.920144  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 06:10:01.920391  365349 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1210 06:10:01.920451  365349 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:01.920493  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:01.950841  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:01.954960  365349 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162" in container runtime
	I1210 06:10:01.955154  365349 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:01.955202  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:01.955222  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:01.959053  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:01.978135  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:01.978297  365349 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 06:10:01.978363  365349 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:10:01.978411  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:02.009672  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:02.025954  365349 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896" in container runtime
	I1210 06:10:02.025996  365349 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:02.026050  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:02.026119  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:02.042262  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:02.067464  365349 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22" in container runtime
	I1210 06:10:02.067558  365349 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:02.067643  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:02.072703  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:10:02.072886  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:02.097345  365349 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6" in container runtime
	I1210 06:10:02.097433  365349 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:02.097516  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:02.117315  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:02.117470  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:02.149864  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 06:10:02.149957  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:02.152849  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:02.152917  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:10:02.152975  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 06:10:02.204217  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:02.204325  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 06:10:02.273146  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:02.273263  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1210 06:10:02.273570  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 06:10:02.273326  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:10:02.273349  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1210 06:10:02.273384  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:02.273805  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 06:10:02.297134  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3
	I1210 06:10:02.297488  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 06:10:02.297390  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 06:10:02.349137  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 06:10:02.349238  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 06:10:02.349245  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1210 06:10:02.349314  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 06:10:02.349335  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1210 06:10:02.349386  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 06:10:02.349463  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:10:02.349517  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 06:10:02.349543  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3
	I1210 06:10:02.349594  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 06:10:02.349663  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 06:10:02.349711  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (22806528 bytes)
	I1210 06:10:02.434699  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3
	I1210 06:10:02.434814  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 06:10:02.434881  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:10:02.434912  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 06:10:02.434953  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 06:10:02.434963  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (24578048 bytes)
	I1210 06:10:02.435025  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3
	I1210 06:10:02.435079  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 06:10:02.514301  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 06:10:02.514389  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (20730880 bytes)
	I1210 06:10:02.523374  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 06:10:02.523414  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (15787008 bytes)
	I1210 06:10:02.568890  365349 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:10:02.569020  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1210 06:10:02.744084  365349 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 06:10:02.744309  365349 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:03.073173  365349 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 06:10:03.073322  365349 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:03.073276  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 06:10:03.073437  365349 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 06:10:03.073465  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 06:10:03.073544  365349 ssh_runner.go:195] Run: which crictl
	I1210 06:10:04.551583  365349 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.3: (1.478095905s)
	I1210 06:10:04.551620  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 from cache
	I1210 06:10:04.551640  365349 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 06:10:04.551690  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1210 06:10:04.551715  365349 ssh_runner.go:235] Completed: which crictl: (1.478139671s)
	I1210 06:10:04.551790  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:06.152515  365349 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.600801751s)
	I1210 06:10:06.152546  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1210 06:10:06.152549  365349 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.600742434s)
	I1210 06:10:06.152564  365349 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 06:10:06.152615  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1210 06:10:06.152615  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:07.914056  365349 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.76135325s)
	I1210 06:10:07.914136  365349 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:07.914154  365349 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.761523075s)
	I1210 06:10:07.914173  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 06:10:07.914191  365349 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 06:10:07.914227  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 06:10:09.232851  365349 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.3: (1.318602981s)
	I1210 06:10:09.232880  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 from cache
	I1210 06:10:09.232898  365349 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 06:10:09.232946  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 06:10:09.233015  365349 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.318869153s)
	I1210 06:10:09.233043  365349 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:10:09.233110  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:10:10.399126  365349 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.3: (1.16615352s)
	I1210 06:10:10.399156  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 from cache
	I1210 06:10:10.399174  365349 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 06:10:10.399223  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 06:10:10.399292  365349 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.166173385s)
	I1210 06:10:10.399311  365349 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:10:10.399327  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 06:10:11.814179  365349 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.3: (1.414929692s)
	I1210 06:10:11.814209  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 from cache
	I1210 06:10:11.814231  365349 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:10:11.814313  365349 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:10:12.381863  365349 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:10:12.381910  365349 cache_images.go:125] Successfully loaded all cached images
	I1210 06:10:12.381917  365349 cache_images.go:94] duration metric: took 10.870040774s to LoadCachedImages
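Note: with no preload tarball for this arch/runtime combination ("assuming images are not preloaded" above), every image went through the same cycle: inspect by tag in the node's podman storage, crictl rmi any stale tag, scp the cached tar from the host, podman load. One image's cycle, condensed from the log (these commands run inside the node):

    IMG=registry.k8s.io/etcd:3.6.5-0
    TAR=/var/lib/minikube/images/etcd_3.6.5-0
    sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1 || {
      sudo /usr/local/bin/crictl rmi "$IMG"   # remove a tag whose on-node ID differs from the cache
      # (host side) scp the cached tarball to $TAR, then load it; CRI-O
      # shares podman's image store, so the load makes it visible to crictl.
      sudo podman load -i "$TAR"
    }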
	I1210 06:10:12.381929  365349 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1210 06:10:12.382035  365349 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-241520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-241520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:10:12.382118  365349 ssh_runner.go:195] Run: crio config
	I1210 06:10:12.435192  365349 cni.go:84] Creating CNI manager for ""
	I1210 06:10:12.435310  365349 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:10:12.435337  365349 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:10:12.435362  365349 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-241520 NodeName:addons-241520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:10:12.435504  365349 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-241520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
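Aside: the manifest above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few steps below (2210 bytes). Saved to a file, it can be sanity-checked offline with kubeadm itself (the validate subcommand exists since roughly v1.26):

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new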
	I1210 06:10:12.435578  365349 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 06:10:12.443806  365349 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 06:10:12.443923  365349 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 06:10:12.452382  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256
	I1210 06:10:12.452462  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubelet.sha256
	I1210 06:10:12.452489  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 06:10:12.452557  365349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:10:12.452580  365349 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 06:10:12.452635  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 06:10:12.467662  365349 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 06:10:12.467697  365349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 06:10:12.467722  365349 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 06:10:12.467732  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (71434424 bytes)
	I1210 06:10:12.467699  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (58130616 bytes)
	I1210 06:10:12.479264  365349 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 06:10:12.479346  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (56426788 bytes)
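When no cached binary exists, minikube falls back to the checksum-qualified dl.k8s.io URLs logged above. A rough by-hand equivalent, assuming the release layout those URLs point at (each binary published next to a sidecar .sha256 file):

	# Fetch kubectl for linux/arm64 and verify it against its published digest.
	curl -fLO "https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl"
	curl -fLO "https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256"
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check   # expects: kubectl: OK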
	I1210 06:10:13.342341  365349 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:10:13.351793  365349 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 06:10:13.366182  365349 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:10:13.379953  365349 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1210 06:10:13.393705  365349 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:10:13.398099  365349 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
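The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current node IP. A quick sanity check afterwards, assuming the standard NSS resolver tooling in the node image:

	grep control-plane.minikube.internal /etc/hosts    # 192.168.49.2  control-plane.minikube.internal
	getent hosts control-plane.minikube.internal       # should resolve via the /etc/hosts entry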
	I1210 06:10:13.408916  365349 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:13.536069  365349 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:10:13.556613  365349 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520 for IP: 192.168.49.2
	I1210 06:10:13.556638  365349 certs.go:195] generating shared ca certs ...
	I1210 06:10:13.556655  365349 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:13.556797  365349 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 06:10:13.665642  365349 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt ...
	I1210 06:10:13.665679  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt: {Name:mk3294ca51bc393d6eb474de2127d23ebdb0e000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:13.665919  365349 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key ...
	I1210 06:10:13.665935  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key: {Name:mk3cbf7d8e863061adcb732ebb1f3925124a7d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:13.666024  365349 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 06:10:13.749667  365349 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt ...
	I1210 06:10:13.749713  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt: {Name:mka2a0678c24a34aafc71fb5a32c865f44d9d83b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:13.749918  365349 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key ...
	I1210 06:10:13.749938  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key: {Name:mk6612b7518e0a3b98473aa40d584b0ef31fbdf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:13.750028  365349 certs.go:257] generating profile certs ...
	I1210 06:10:13.750101  365349 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.key
	I1210 06:10:13.750119  365349 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt with IP's: []
	I1210 06:10:14.046427  365349 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt ...
	I1210 06:10:14.046470  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: {Name:mkf4a9c5f2c3da2d57ca27617d7315b5ace6f2a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:14.047463  365349 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.key ...
	I1210 06:10:14.047488  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.key: {Name:mkcadb65cf72cf66fd89d84d0da6d0e60d07aac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:14.047603  365349 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.key.e0455d5b
	I1210 06:10:14.047639  365349 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.crt.e0455d5b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 06:10:14.210195  365349 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.crt.e0455d5b ...
	I1210 06:10:14.210229  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.crt.e0455d5b: {Name:mkbfd561a6d0bb0ea4b99987ccb5a76507ecca44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:14.210414  365349 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.key.e0455d5b ...
	I1210 06:10:14.210431  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.key.e0455d5b: {Name:mkc057dfe18133c542ce4563bbd25ef24d5185d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:14.210520  365349 certs.go:382] copying /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.crt.e0455d5b -> /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.crt
	I1210 06:10:14.210599  365349 certs.go:386] copying /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.key.e0455d5b -> /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.key
	I1210 06:10:14.210652  365349 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.key
	I1210 06:10:14.210672  365349 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.crt with IP's: []
	I1210 06:10:14.348361  365349 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.crt ...
	I1210 06:10:14.348393  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.crt: {Name:mk264999c5af78ee55216c281016f59845db8bbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:14.348574  365349 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.key ...
	I1210 06:10:14.348589  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.key: {Name:mk87f9211b6cb9f59ff85aeea12277e09be68862 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:14.348783  365349 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:10:14.348830  365349 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:10:14.348865  365349 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:10:14.348897  365349 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 06:10:14.349503  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:10:14.368204  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:10:14.392042  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:10:14.412581  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:10:14.436763  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 06:10:14.456697  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:10:14.475852  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:10:14.494586  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:10:14.513414  365349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:10:14.531979  365349 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:10:14.545986  365349 ssh_runner.go:195] Run: openssl version
	I1210 06:10:14.552779  365349 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:14.560783  365349 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:10:14.568878  365349 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:14.572974  365349 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:14.573059  365349 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:10:14.614440  365349 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:10:14.622205  365349 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
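The b5213941.0 symlink follows OpenSSL's subject-hash convention for trust directories: the link name is the value `openssl x509 -hash` prints for the certificate, plus a .0 suffix. The two runs above amount to:

	# Compute the subject hash, then install the hash-named symlink OpenSSL looks up.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"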
	I1210 06:10:14.630060  365349 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:10:14.633841  365349 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:10:14.633893  365349 kubeadm.go:401] StartCluster: {Name:addons-241520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-241520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:10:14.633971  365349 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:10:14.634032  365349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:10:14.660845  365349 cri.go:89] found id: ""
	I1210 06:10:14.660923  365349 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:10:14.669263  365349 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:10:14.677560  365349 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:10:14.677650  365349 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:10:14.685660  365349 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:10:14.685725  365349 kubeadm.go:158] found existing configuration files:
	
	I1210 06:10:14.685803  365349 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:10:14.693864  365349 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:10:14.693978  365349 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:10:14.701830  365349 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:10:14.710282  365349 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:10:14.710378  365349 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:10:14.718116  365349 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:10:14.726323  365349 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:10:14.726390  365349 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:10:14.734143  365349 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:10:14.742355  365349 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:10:14.742432  365349 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:10:14.750452  365349 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:10:14.814533  365349 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 06:10:14.814812  365349 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:10:14.885462  365349 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:10:31.984556  365349 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 06:10:31.984617  365349 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:10:31.984709  365349 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:10:31.984768  365349 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:10:31.984807  365349 kubeadm.go:319] OS: Linux
	I1210 06:10:31.984856  365349 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:10:31.984909  365349 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:10:31.984960  365349 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:10:31.985018  365349 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:10:31.985073  365349 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:10:31.985126  365349 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:10:31.985176  365349 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:10:31.985239  365349 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:10:31.985291  365349 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:10:31.985369  365349 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:10:31.985468  365349 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:10:31.985562  365349 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:10:31.985628  365349 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:10:31.988538  365349 out.go:252]   - Generating certificates and keys ...
	I1210 06:10:31.988664  365349 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:10:31.988746  365349 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:10:31.988844  365349 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:10:31.988932  365349 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:10:31.989026  365349 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:10:31.989121  365349 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:10:31.989218  365349 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:10:31.989360  365349 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-241520 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 06:10:31.989450  365349 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:10:31.989596  365349 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-241520 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 06:10:31.989673  365349 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:10:31.989742  365349 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:10:31.989793  365349 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:10:31.989852  365349 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:10:31.989911  365349 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:10:31.990000  365349 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:10:31.990083  365349 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:10:31.990157  365349 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:10:31.990213  365349 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:10:31.990337  365349 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:10:31.990436  365349 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:10:31.993447  365349 out.go:252]   - Booting up control plane ...
	I1210 06:10:31.993595  365349 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:10:31.993736  365349 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:10:31.993821  365349 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:10:31.993927  365349 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:10:31.994020  365349 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:10:31.994182  365349 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:10:31.994319  365349 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:10:31.994364  365349 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:10:31.994506  365349 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:10:31.994614  365349 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:10:31.994672  365349 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003879143s
	I1210 06:10:31.994843  365349 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:10:31.994949  365349 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1210 06:10:31.995048  365349 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:10:31.995138  365349 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 06:10:31.995224  365349 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.047564519s
	I1210 06:10:31.995299  365349 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.798266576s
	I1210 06:10:31.995375  365349 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502206802s
	I1210 06:10:31.995493  365349 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 06:10:31.995632  365349 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 06:10:31.995699  365349 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 06:10:31.995910  365349 kubeadm.go:319] [mark-control-plane] Marking the node addons-241520 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 06:10:31.995973  365349 kubeadm.go:319] [bootstrap-token] Using token: zcli1o.7gec4ombe4uo3w4h
	I1210 06:10:31.999172  365349 out.go:252]   - Configuring RBAC rules ...
	I1210 06:10:31.999482  365349 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 06:10:31.999570  365349 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 06:10:31.999727  365349 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 06:10:31.999855  365349 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 06:10:31.999975  365349 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 06:10:32.000061  365349 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 06:10:32.000176  365349 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 06:10:32.000227  365349 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 06:10:32.000278  365349 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 06:10:32.000282  365349 kubeadm.go:319] 
	I1210 06:10:32.000342  365349 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 06:10:32.000364  365349 kubeadm.go:319] 
	I1210 06:10:32.000446  365349 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 06:10:32.000449  365349 kubeadm.go:319] 
	I1210 06:10:32.000487  365349 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 06:10:32.000553  365349 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 06:10:32.000608  365349 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 06:10:32.000614  365349 kubeadm.go:319] 
	I1210 06:10:32.000681  365349 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 06:10:32.000685  365349 kubeadm.go:319] 
	I1210 06:10:32.000746  365349 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 06:10:32.000750  365349 kubeadm.go:319] 
	I1210 06:10:32.000807  365349 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 06:10:32.000890  365349 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 06:10:32.000959  365349 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 06:10:32.000971  365349 kubeadm.go:319] 
	I1210 06:10:32.001111  365349 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 06:10:32.001433  365349 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 06:10:32.001442  365349 kubeadm.go:319] 
	I1210 06:10:32.001551  365349 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zcli1o.7gec4ombe4uo3w4h \
	I1210 06:10:32.001685  365349 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:51315b15cc463daae0db99738888dd9b68c1a2544d5ab5bde8f25324b73b939c \
	I1210 06:10:32.001707  365349 kubeadm.go:319] 	--control-plane 
	I1210 06:10:32.001711  365349 kubeadm.go:319] 
	I1210 06:10:32.001824  365349 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 06:10:32.001829  365349 kubeadm.go:319] 
	I1210 06:10:32.001935  365349 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zcli1o.7gec4ombe4uo3w4h \
	I1210 06:10:32.002075  365349 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:51315b15cc463daae0db99738888dd9b68c1a2544d5ab5bde8f25324b73b939c 
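The --discovery-token-ca-cert-hash printed above is a SHA-256 digest of the cluster CA's DER-encoded public key. It can be recomputed on the control plane using the recipe from the kubeadm docs, here pointed at minikube's certificatesDir (/var/lib/minikube/certs, per the ClusterConfiguration above) and assuming an RSA CA key:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'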
	I1210 06:10:32.002096  365349 cni.go:84] Creating CNI manager for ""
	I1210 06:10:32.002105  365349 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:10:32.007238  365349 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 06:10:32.010414  365349 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 06:10:32.015634  365349 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1210 06:10:32.015659  365349 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 06:10:32.033603  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
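With the docker driver and the crio runtime, minikube renders a kindnet manifest into /var/tmp/minikube/cni.yaml and applies it with the staged kubectl, as above. A hedged follow-up check, assuming kindnet's usual DaemonSet name in kube-system:

	sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonset kindnet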
	I1210 06:10:32.340500  365349 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 06:10:32.340643  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:32.340722  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-241520 minikube.k8s.io/updated_at=2025_12_10T06_10_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=addons-241520 minikube.k8s.io/primary=true
	I1210 06:10:32.535856  365349 ops.go:34] apiserver oom_adj: -16
	I1210 06:10:32.536101  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:33.036878  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:33.536472  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:34.037072  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:34.536549  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:35.036155  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:35.536572  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:36.036702  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:36.536822  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:37.036648  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:37.536933  365349 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:10:37.641862  365349 kubeadm.go:1114] duration metric: took 5.301271459s to wait for elevateKubeSystemPrivileges
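The half-second cadence of the `kubectl get sa default` runs above is a poll: minikube retries until the default ServiceAccount exists before finishing elevateKubeSystemPrivileges. Roughly, as a shell loop (an illustrative equivalent, not minikube's actual Go implementation):

	# Poll until the default ServiceAccount is created by the controller manager.
	until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done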
	I1210 06:10:37.641902  365349 kubeadm.go:403] duration metric: took 23.008014399s to StartCluster
	I1210 06:10:37.641919  365349 settings.go:142] acquiring lock: {Name:mk50f3096e55008942432e93c6cf4976a70e957e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:37.642044  365349 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:10:37.642464  365349 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:10:37.642655  365349 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 06:10:37.642680  365349 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:10:37.642919  365349 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:37.642951  365349 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
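The toEnable map above mirrors the profile's addon selections; the same switches are normally driven from the minikube CLI. For example, against this run's profile name:

	minikube addons enable metrics-server -p addons-241520
	minikube addons disable volcano -p addons-241520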
	I1210 06:10:37.643039  365349 addons.go:70] Setting yakd=true in profile "addons-241520"
	I1210 06:10:37.643053  365349 addons.go:239] Setting addon yakd=true in "addons-241520"
	I1210 06:10:37.643074  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.643539  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.643984  365349 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-241520"
	I1210 06:10:37.644009  365349 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-241520"
	I1210 06:10:37.644025  365349 addons.go:70] Setting metrics-server=true in profile "addons-241520"
	I1210 06:10:37.644040  365349 addons.go:239] Setting addon metrics-server=true in "addons-241520"
	I1210 06:10:37.644035  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.644060  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.644474  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.644499  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.645075  365349 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-241520"
	I1210 06:10:37.645104  365349 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-241520"
	I1210 06:10:37.645132  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.645596  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.646961  365349 addons.go:70] Setting cloud-spanner=true in profile "addons-241520"
	I1210 06:10:37.646993  365349 addons.go:239] Setting addon cloud-spanner=true in "addons-241520"
	I1210 06:10:37.647036  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.647485  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.650015  365349 addons.go:70] Setting registry=true in profile "addons-241520"
	I1210 06:10:37.650053  365349 addons.go:239] Setting addon registry=true in "addons-241520"
	I1210 06:10:37.650095  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.650585  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.657448  365349 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-241520"
	I1210 06:10:37.657518  365349 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-241520"
	I1210 06:10:37.657551  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.658022  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.659296  365349 addons.go:70] Setting registry-creds=true in profile "addons-241520"
	I1210 06:10:37.659332  365349 addons.go:239] Setting addon registry-creds=true in "addons-241520"
	I1210 06:10:37.659381  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.659885  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.677862  365349 addons.go:70] Setting default-storageclass=true in profile "addons-241520"
	I1210 06:10:37.677884  365349 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-241520"
	I1210 06:10:37.677899  365349 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-241520"
	I1210 06:10:37.677907  365349 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-241520"
	I1210 06:10:37.678274  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.689461  365349 addons.go:70] Setting volcano=true in profile "addons-241520"
	I1210 06:10:37.689500  365349 addons.go:239] Setting addon volcano=true in "addons-241520"
	I1210 06:10:37.689542  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.677862  365349 addons.go:70] Setting storage-provisioner=true in profile "addons-241520"
	I1210 06:10:37.689824  365349 addons.go:239] Setting addon storage-provisioner=true in "addons-241520"
	I1210 06:10:37.689853  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.690060  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.690273  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.709527  365349 addons.go:70] Setting gcp-auth=true in profile "addons-241520"
	I1210 06:10:37.709566  365349 mustload.go:66] Loading cluster: addons-241520
	I1210 06:10:37.709766  365349 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:10:37.710023  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.717371  365349 addons.go:70] Setting volumesnapshots=true in profile "addons-241520"
	I1210 06:10:37.717407  365349 addons.go:239] Setting addon volumesnapshots=true in "addons-241520"
	I1210 06:10:37.717444  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.718367  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.725441  365349 addons.go:70] Setting ingress=true in profile "addons-241520"
	I1210 06:10:37.725475  365349 addons.go:239] Setting addon ingress=true in "addons-241520"
	I1210 06:10:37.725524  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.726004  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.743854  365349 addons.go:70] Setting ingress-dns=true in profile "addons-241520"
	I1210 06:10:37.743906  365349 addons.go:239] Setting addon ingress-dns=true in "addons-241520"
	I1210 06:10:37.743953  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.745451  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.763905  365349 out.go:179] * Verifying Kubernetes components...
	I1210 06:10:37.764546  365349 addons.go:70] Setting inspektor-gadget=true in profile "addons-241520"
	I1210 06:10:37.764583  365349 addons.go:239] Setting addon inspektor-gadget=true in "addons-241520"
	I1210 06:10:37.764623  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:37.765163  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.813535  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:37.882099  365349 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 06:10:37.893824  365349 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 06:10:37.893901  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 06:10:37.893986  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:37.937158  365349 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:10:37.942449  365349 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 06:10:37.949073  365349 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 06:10:37.949257  365349 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 06:10:37.949285  365349 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	W1210 06:10:37.951252  365349 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
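The volcano failure here is an addon/runtime compatibility check rather than a cluster error: the addon's enable callback rejects crio. Per-addon status for a profile can be inspected with the standard CLI (shown as a sketch):

	minikube addons list -p addons-241520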
	I1210 06:10:37.952018  365349 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 06:10:37.952038  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 06:10:37.952107  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:37.971908  365349 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 06:10:37.974695  365349 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 06:10:37.974746  365349 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 06:10:37.974930  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.007859  365349 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 06:10:38.012326  365349 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 06:10:38.015332  365349 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 06:10:38.015359  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 06:10:38.015435  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.017666  365349 addons.go:239] Setting addon default-storageclass=true in "addons-241520"
	I1210 06:10:38.017719  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:38.018174  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:38.025739  365349 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-241520"
	I1210 06:10:38.025793  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:38.026257  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:38.038668  365349 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:10:38.038789  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:38.048532  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.055515  365349 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 06:10:38.055727  365349 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:10:38.055751  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:10:38.055828  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.074533  365349 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 06:10:38.081532  365349 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 06:10:38.081557  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 06:10:38.081629  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.091698  365349 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 06:10:38.097598  365349 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 06:10:38.097633  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 06:10:38.097706  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.100451  365349 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 06:10:38.100471  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 06:10:38.101142  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.128598  365349 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 06:10:38.131861  365349 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 06:10:38.134805  365349 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 06:10:38.141519  365349 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 06:10:38.155216  365349 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 06:10:38.155247  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 06:10:38.155311  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.168736  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 06:10:38.169242  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 06:10:38.171538  365349 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 06:10:38.171564  365349 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 06:10:38.171641  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.183411  365349 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 06:10:38.183434  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 06:10:38.183499  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.183735  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.186769  365349 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 06:10:38.187179  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.188804  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.208165  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 06:10:38.208408  365349 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 06:10:38.210242  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.253109  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 06:10:38.253406  365349 out.go:179]   - Using image docker.io/busybox:stable
	I1210 06:10:38.257784  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.272973  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 06:10:38.273255  365349 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 06:10:38.273313  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 06:10:38.273419  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.273826  365349 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:10:38.273838  365349 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:10:38.273883  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.299150  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 06:10:38.302234  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 06:10:38.307558  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 06:10:38.307734  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.309169  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.321361  365349 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 06:10:38.324313  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 06:10:38.324381  365349 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 06:10:38.324476  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:38.339689  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.345510  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.347953  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.362056  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.369316  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.402911  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.403792  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.415159  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:38.517108  365349 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:10:39.124671  365349 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 06:10:39.124695  365349 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 06:10:39.155336  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 06:10:39.179860  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 06:10:39.222365  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 06:10:39.251814  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:10:39.351925  365349 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 06:10:39.351998  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
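
"scp memory --> <path>" means the manifest is an embedded asset streamed from memory rather than copied from a file on the Jenkins host. One way to express that over an existing SSH session (a sketch, assuming the golang.org/x/crypto/ssh client from above; sudo tee is a stand-in for minikube's actual transfer mechanism):

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // copyBytes streams in-memory manifest bytes to a remote path:
    // the "scp memory --> path" idea from the log lines above.
    func copyBytes(sess *ssh.Session, data []byte, dst string) error {
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + dst + " >/dev/null")
    }
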
	I1210 06:10:39.410223  365349 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 06:10:39.410301  365349 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 06:10:39.495067  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 06:10:39.512311  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 06:10:39.512392  365349 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 06:10:39.522663  365349 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 06:10:39.522728  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 06:10:39.544082  365349 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 06:10:39.544156  365349 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 06:10:39.583379  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 06:10:39.591230  365349 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 06:10:39.591305  365349 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 06:10:39.598388  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 06:10:39.607792  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 06:10:39.609829  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:10:39.615254  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 06:10:39.758701  365349 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 06:10:39.758782  365349 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 06:10:39.786983  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 06:10:39.787063  365349 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 06:10:39.789750  365349 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 06:10:39.789824  365349 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 06:10:39.873896  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 06:10:39.902165  365349 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 06:10:39.902240  365349 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 06:10:40.073041  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 06:10:40.073157  365349 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 06:10:40.092189  365349 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 06:10:40.092274  365349 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 06:10:40.106009  365349 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 06:10:40.106040  365349 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 06:10:40.305628  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 06:10:40.335931  365349 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 06:10:40.336006  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 06:10:40.470830  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 06:10:40.470904  365349 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 06:10:40.520189  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 06:10:40.520269  365349 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 06:10:40.598607  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 06:10:40.808509  365349 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 06:10:40.808603  365349 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 06:10:40.814558  365349 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 06:10:40.814633  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 06:10:40.924102  365349 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 06:10:40.924177  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 06:10:41.015328  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 06:10:41.223537  365349 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 06:10:41.223615  365349 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 06:10:41.711331  365349 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 06:10:41.711405  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 06:10:41.742912  365349 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.225759413s)
	I1210 06:10:41.743777  365349 node_ready.go:35] waiting up to 6m0s for node "addons-241520" to be "Ready" ...
	I1210 06:10:41.744084  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.588672281s)
	I1210 06:10:41.744218  365349 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.557409573s)
	I1210 06:10:41.744256  365349 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
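
The 3.5 s Completed line above is the sed pipeline rewriting the coredns ConfigMap in place: it inserts a hosts stanza ahead of the forward directive (and a log directive before errors) so host.minikube.internal resolves to the gateway 192.168.49.1 from inside the cluster. Assuming the stock kubeadm Corefile layout, the relevant fragment afterwards reads (abridged; untouched directives elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
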
	I1210 06:10:42.049923  365349 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 06:10:42.049993  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 06:10:42.253360  365349 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-241520" context rescaled to 1 replicas
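
The kapi.go:214 line is minikube trimming the default two-replica coredns Deployment to a single replica, which is enough on a one-node cluster. The equivalent scale update with client-go, as a sketch (kubeconfig path and function name are illustrative):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // scaleCoreDNS mirrors the "rescaled to 1 replicas" step.
    func scaleCoreDNS(ctx context.Context, kubeconfig string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        deps := cs.AppsV1().Deployments("kube-system")
        scale, err := deps.GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1
        _, err = deps.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }
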
	I1210 06:10:42.361083  365349 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 06:10:42.361171  365349 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 06:10:42.609558  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.357667857s)
	I1210 06:10:42.609887  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.114744908s)
	I1210 06:10:42.609957  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.387068834s)
	I1210 06:10:42.610029  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.430098764s)
	I1210 06:10:42.668366  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1210 06:10:43.774900  365349 node_ready.go:57] node "addons-241520" has "Ready":"False" status (will retry)
	I1210 06:10:44.824528  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.241059969s)
	I1210 06:10:45.674835  365349 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 06:10:45.674915  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:45.701892  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:45.824867  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.226391271s)
	I1210 06:10:45.824896  365349 addons.go:495] Verifying addon ingress=true in "addons-241520"
	I1210 06:10:45.825168  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.217300105s)
	I1210 06:10:45.825392  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.215494067s)
	I1210 06:10:45.825451  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.210124624s)
	I1210 06:10:45.825475  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.951507291s)
	I1210 06:10:45.825876  365349 addons.go:495] Verifying addon registry=true in "addons-241520"
	I1210 06:10:45.825552  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.519899856s)
	I1210 06:10:45.826347  365349 addons.go:495] Verifying addon metrics-server=true in "addons-241520"
	I1210 06:10:45.825580  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.226910556s)
	I1210 06:10:45.825648  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.810240776s)
	W1210 06:10:45.827421  365349 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 06:10:45.827448  365349 retry.go:31] will retry after 352.689924ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
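
The failure being retried here is the usual CRD registration race: a single kubectl apply both creates the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the client's REST mapper has no mapping for the brand-new kind yet, hence "no matches for kind ... ensure CRDs are installed first". minikube's answer, visible at 06:10:46 below, is to wait briefly and re-apply (with --force). A bare-bones wrapper in the same spirit; run, the delay constants, and the attempt cap are illustrative, not minikube's retry.go:

    import "time"

    // applyWithRetry re-runs kubectl apply until CRD-backed kinds become
    // mappable, sleeping between attempts. minikube's retry.go uses a
    // jittered schedule; this sketch uses plain doubling.
    func applyWithRetry(run func(args ...string) error, files []string) error {
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        delay := 350 * time.Millisecond
        var err error
        for attempt := 0; attempt < 5; attempt++ {
            if err = run(args...); err == nil {
                return nil
            }
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }
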
	I1210 06:10:45.828315  365349 out.go:179] * Verifying registry addon...
	I1210 06:10:45.828363  365349 out.go:179] * Verifying ingress addon...
	I1210 06:10:45.830466  365349 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-241520 service yakd-dashboard -n yakd-dashboard
	
	I1210 06:10:45.833386  365349 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 06:10:45.834397  365349 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 06:10:45.835384  365349 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 06:10:45.852095  365349 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 06:10:45.852117  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:45.852297  365349 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 06:10:45.852304  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
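
The kapi.go:75/86/96 triplet is one poll loop per addon: list pods matching a label selector, log how many were found, then keep re-checking until each leaves Pending. A stripped-down equivalent with client-go (a sketch; minikube's real loop also inspects container statuses):

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodsRunning polls pods matching selector in ns until all
    // report phase Running or the context expires.
    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            running := 0
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    running++
                }
            }
            if len(pods.Items) > 0 && running == len(pods.Items) {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }
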
	I1210 06:10:45.860524  365349 addons.go:239] Setting addon gcp-auth=true in "addons-241520"
	I1210 06:10:45.860625  365349 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:10:45.861180  365349 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:10:45.883483  365349 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 06:10:45.883539  365349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:10:45.903370  365349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:10:46.180664  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 06:10:46.203321  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.534843962s)
	I1210 06:10:46.203360  365349 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-241520"
	I1210 06:10:46.206707  365349 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 06:10:46.206713  365349 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 06:10:46.209697  365349 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 06:10:46.210471  365349 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 06:10:46.212742  365349 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 06:10:46.212777  365349 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 06:10:46.218830  365349 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 06:10:46.218850  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 06:10:46.249976  365349 node_ready.go:57] node "addons-241520" has "Ready":"False" status (will retry)
	I1210 06:10:46.262517  365349 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 06:10:46.262540  365349 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 06:10:46.280181  365349 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 06:10:46.280200  365349 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 06:10:46.304120  365349 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 06:10:46.339585  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:46.340081  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:46.715087  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:46.840287  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:46.840651  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:47.214168  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:47.336741  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:47.338387  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:47.713415  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:47.837934  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:47.838589  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:48.213910  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:48.336677  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:48.338077  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:48.720958  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 06:10:48.748311  365349 node_ready.go:57] node "addons-241520" has "Ready":"False" status (will retry)
	I1210 06:10:48.838462  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:48.839337  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:48.986925  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.806209274s)
	I1210 06:10:48.987010  365349 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.682821381s)
	I1210 06:10:48.990089  365349 addons.go:495] Verifying addon gcp-auth=true in "addons-241520"
	I1210 06:10:48.993152  365349 out.go:179] * Verifying gcp-auth addon...
	I1210 06:10:48.996827  365349 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 06:10:49.000048  365349 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 06:10:49.000080  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:49.214583  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:49.336606  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:49.338138  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:49.499832  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:49.713902  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:49.838633  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:49.839278  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:50.001697  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:50.214288  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:50.337898  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:50.338374  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:50.500392  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:50.714815  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:50.837548  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:50.837659  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:51.003065  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:51.214255  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 06:10:51.247104  365349 node_ready.go:57] node "addons-241520" has "Ready":"False" status (will retry)
	I1210 06:10:51.337136  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:51.337386  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:51.500357  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:51.713441  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:51.837216  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:51.837645  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:52.008518  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:52.213538  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:52.337303  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:52.337440  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:52.500922  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:52.714341  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:52.838370  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:52.838543  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:53.003150  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:53.214388  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 06:10:53.247274  365349 node_ready.go:57] node "addons-241520" has "Ready":"False" status (will retry)
	I1210 06:10:53.336974  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:53.337722  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:53.500790  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:53.713660  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:53.836809  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:53.837518  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:54.074036  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:54.223806  365349 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 06:10:54.223831  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:54.266275  365349 node_ready.go:49] node "addons-241520" is "Ready"
	I1210 06:10:54.266306  365349 node_ready.go:38] duration metric: took 12.52246546s for node "addons-241520" to be "Ready" ...
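
node_ready.go's 12.5 s wait is a poll on the node object for the NodeReady condition; the earlier W-level "Ready":"False" lines are the retry path of the same check. The condition test itself, with client-go types (a sketch):

    import corev1 "k8s.io/api/core/v1"

    // isNodeReady mirrors the check behind the node_ready.go lines:
    // the node counts as Ready once the NodeReady condition is True.
    func isNodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
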
	I1210 06:10:54.266320  365349 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:10:54.266379  365349 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:10:54.288568  365349 api_server.go:72] duration metric: took 16.645858708s to wait for apiserver process to appear ...
	I1210 06:10:54.288596  365349 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:10:54.288616  365349 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1210 06:10:54.301862  365349 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1210 06:10:54.305242  365349 api_server.go:141] control plane version: v1.34.3
	I1210 06:10:54.305274  365349 api_server.go:131] duration metric: took 16.670374ms to wait for apiserver health ...
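
api_server.go declares the control plane healthy once GET https://192.168.49.2:8443/healthz answers 200 with body "ok". The apiserver's certificate for that address is not in any system trust store, so a bootstrap probe either pins the cluster CA or skips verification; a minimal sketch of the latter:

    import (
        "crypto/tls"
        "io"
        "net/http"
        "time"
    )

    // probeHealthz performs the healthz check from the log. Skipping TLS
    // verification is acceptable only for this kind of bootstrap probe.
    func probeHealthz(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }
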
	I1210 06:10:54.305284  365349 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:10:54.373969  365349 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 06:10:54.373998  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:54.374479  365349 system_pods.go:59] 19 kube-system pods found
	I1210 06:10:54.374518  365349 system_pods.go:61] "coredns-66bc5c9577-ds7m5" [4ab8c833-9e84-4a9c-a5a3-00db1cac3a38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:10:54.374533  365349 system_pods.go:61] "csi-hostpath-attacher-0" [51883b17-568a-4e55-98af-0e05b0b82a8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 06:10:54.374544  365349 system_pods.go:61] "csi-hostpath-resizer-0" [1e3ff8b8-58b0-4a82-a161-424bf106360a] Pending
	I1210 06:10:54.374549  365349 system_pods.go:61] "csi-hostpathplugin-qf6mx" [b90691c5-fbe5-45ea-b6af-52ae35765477] Pending
	I1210 06:10:54.374554  365349 system_pods.go:61] "etcd-addons-241520" [267e51ba-ce7e-4d65-9431-6d8524f1085b] Running
	I1210 06:10:54.374558  365349 system_pods.go:61] "kindnet-h9tr4" [b06a5696-c044-44ec-b102-c34cbf0b480b] Running
	I1210 06:10:54.374562  365349 system_pods.go:61] "kube-apiserver-addons-241520" [6bead7dc-7534-41e5-a5cb-dfb46451b561] Running
	I1210 06:10:54.374569  365349 system_pods.go:61] "kube-controller-manager-addons-241520" [5a837375-b97c-40af-b465-7d1ea919beac] Running
	I1210 06:10:54.374573  365349 system_pods.go:61] "kube-ingress-dns-minikube" [7543ba21-d18b-4d73-9940-71a8dfb241c0] Pending
	I1210 06:10:54.374578  365349 system_pods.go:61] "kube-proxy-srgdx" [7a04de78-15a5-4a0f-b6f4-b7cef4acc511] Running
	I1210 06:10:54.374582  365349 system_pods.go:61] "kube-scheduler-addons-241520" [e176a48a-a983-4774-add2-87513e2748ee] Running
	I1210 06:10:54.374593  365349 system_pods.go:61] "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:10:54.374597  365349 system_pods.go:61] "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Pending
	I1210 06:10:54.374607  365349 system_pods.go:61] "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Pending
	I1210 06:10:54.374613  365349 system_pods.go:61] "registry-creds-764b6fb674-xsqwd" [bb826bb4-0c23-4177-9e45-94b0b2c61e46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 06:10:54.374617  365349 system_pods.go:61] "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Pending
	I1210 06:10:54.374627  365349 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8pzhn" [a9c3f52c-7727-41a9-841c-c1fae4ed4636] Pending
	I1210 06:10:54.374631  365349 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qgnvq" [ba2995c6-cea0-497b-bd27-d8c59ebe811d] Pending
	I1210 06:10:54.374636  365349 system_pods.go:61] "storage-provisioner" [a864cb16-738a-4b0e-b0db-65485b264f6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:10:54.374643  365349 system_pods.go:74] duration metric: took 69.352988ms to wait for pod list to return data ...
	I1210 06:10:54.374656  365349 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:10:54.374936  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:54.500804  365349 default_sa.go:45] found service account: "default"
	I1210 06:10:54.500833  365349 default_sa.go:55] duration metric: took 126.170222ms for default service account to be created ...
	I1210 06:10:54.500844  365349 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:10:54.519999  365349 system_pods.go:86] 19 kube-system pods found
	I1210 06:10:54.520037  365349 system_pods.go:89] "coredns-66bc5c9577-ds7m5" [4ab8c833-9e84-4a9c-a5a3-00db1cac3a38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:10:54.520047  365349 system_pods.go:89] "csi-hostpath-attacher-0" [51883b17-568a-4e55-98af-0e05b0b82a8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 06:10:54.520052  365349 system_pods.go:89] "csi-hostpath-resizer-0" [1e3ff8b8-58b0-4a82-a161-424bf106360a] Pending
	I1210 06:10:54.520057  365349 system_pods.go:89] "csi-hostpathplugin-qf6mx" [b90691c5-fbe5-45ea-b6af-52ae35765477] Pending
	I1210 06:10:54.520061  365349 system_pods.go:89] "etcd-addons-241520" [267e51ba-ce7e-4d65-9431-6d8524f1085b] Running
	I1210 06:10:54.520065  365349 system_pods.go:89] "kindnet-h9tr4" [b06a5696-c044-44ec-b102-c34cbf0b480b] Running
	I1210 06:10:54.520069  365349 system_pods.go:89] "kube-apiserver-addons-241520" [6bead7dc-7534-41e5-a5cb-dfb46451b561] Running
	I1210 06:10:54.520078  365349 system_pods.go:89] "kube-controller-manager-addons-241520" [5a837375-b97c-40af-b465-7d1ea919beac] Running
	I1210 06:10:54.520082  365349 system_pods.go:89] "kube-ingress-dns-minikube" [7543ba21-d18b-4d73-9940-71a8dfb241c0] Pending
	I1210 06:10:54.520086  365349 system_pods.go:89] "kube-proxy-srgdx" [7a04de78-15a5-4a0f-b6f4-b7cef4acc511] Running
	I1210 06:10:54.520090  365349 system_pods.go:89] "kube-scheduler-addons-241520" [e176a48a-a983-4774-add2-87513e2748ee] Running
	I1210 06:10:54.520101  365349 system_pods.go:89] "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:10:54.520105  365349 system_pods.go:89] "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Pending
	I1210 06:10:54.520127  365349 system_pods.go:89] "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Pending
	I1210 06:10:54.520139  365349 system_pods.go:89] "registry-creds-764b6fb674-xsqwd" [bb826bb4-0c23-4177-9e45-94b0b2c61e46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 06:10:54.520144  365349 system_pods.go:89] "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Pending
	I1210 06:10:54.520150  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8pzhn" [a9c3f52c-7727-41a9-841c-c1fae4ed4636] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:54.520160  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qgnvq" [ba2995c6-cea0-497b-bd27-d8c59ebe811d] Pending
	I1210 06:10:54.520165  365349 system_pods.go:89] "storage-provisioner" [a864cb16-738a-4b0e-b0db-65485b264f6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:10:54.520179  365349 retry.go:31] will retry after 244.689844ms: missing components: kube-dns
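
The "missing components: kube-dns" retries that follow are system_pods.go re-listing kube-system until the coredns pods (which carry the legacy k8s-app=kube-dns label) reach Running; the short, uneven delays (244 ms, 387 ms, 394 ms, 561 ms) come from retry.go's jittered backoff. By the 06:10:56 listing below, coredns-66bc5c9577-ds7m5 reports Running.
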
	I1210 06:10:54.535689  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:54.729897  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:54.784354  365349 system_pods.go:86] 19 kube-system pods found
	I1210 06:10:54.784398  365349 system_pods.go:89] "coredns-66bc5c9577-ds7m5" [4ab8c833-9e84-4a9c-a5a3-00db1cac3a38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:10:54.784408  365349 system_pods.go:89] "csi-hostpath-attacher-0" [51883b17-568a-4e55-98af-0e05b0b82a8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 06:10:54.784416  365349 system_pods.go:89] "csi-hostpath-resizer-0" [1e3ff8b8-58b0-4a82-a161-424bf106360a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 06:10:54.784424  365349 system_pods.go:89] "csi-hostpathplugin-qf6mx" [b90691c5-fbe5-45ea-b6af-52ae35765477] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 06:10:54.784433  365349 system_pods.go:89] "etcd-addons-241520" [267e51ba-ce7e-4d65-9431-6d8524f1085b] Running
	I1210 06:10:54.784438  365349 system_pods.go:89] "kindnet-h9tr4" [b06a5696-c044-44ec-b102-c34cbf0b480b] Running
	I1210 06:10:54.784447  365349 system_pods.go:89] "kube-apiserver-addons-241520" [6bead7dc-7534-41e5-a5cb-dfb46451b561] Running
	I1210 06:10:54.784451  365349 system_pods.go:89] "kube-controller-manager-addons-241520" [5a837375-b97c-40af-b465-7d1ea919beac] Running
	I1210 06:10:54.784460  365349 system_pods.go:89] "kube-ingress-dns-minikube" [7543ba21-d18b-4d73-9940-71a8dfb241c0] Pending
	I1210 06:10:54.784470  365349 system_pods.go:89] "kube-proxy-srgdx" [7a04de78-15a5-4a0f-b6f4-b7cef4acc511] Running
	I1210 06:10:54.784474  365349 system_pods.go:89] "kube-scheduler-addons-241520" [e176a48a-a983-4774-add2-87513e2748ee] Running
	I1210 06:10:54.784480  365349 system_pods.go:89] "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:10:54.784486  365349 system_pods.go:89] "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 06:10:54.784494  365349 system_pods.go:89] "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Pending
	I1210 06:10:54.784501  365349 system_pods.go:89] "registry-creds-764b6fb674-xsqwd" [bb826bb4-0c23-4177-9e45-94b0b2c61e46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 06:10:54.784504  365349 system_pods.go:89] "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Pending
	I1210 06:10:54.784512  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8pzhn" [a9c3f52c-7727-41a9-841c-c1fae4ed4636] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:54.784519  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qgnvq" [ba2995c6-cea0-497b-bd27-d8c59ebe811d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:54.784526  365349 system_pods.go:89] "storage-provisioner" [a864cb16-738a-4b0e-b0db-65485b264f6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:10:54.784542  365349 retry.go:31] will retry after 387.791714ms: missing components: kube-dns
	I1210 06:10:54.844051  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:54.844372  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:55.003664  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:55.179465  365349 system_pods.go:86] 19 kube-system pods found
	I1210 06:10:55.179509  365349 system_pods.go:89] "coredns-66bc5c9577-ds7m5" [4ab8c833-9e84-4a9c-a5a3-00db1cac3a38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:10:55.179529  365349 system_pods.go:89] "csi-hostpath-attacher-0" [51883b17-568a-4e55-98af-0e05b0b82a8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 06:10:55.179537  365349 system_pods.go:89] "csi-hostpath-resizer-0" [1e3ff8b8-58b0-4a82-a161-424bf106360a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 06:10:55.179544  365349 system_pods.go:89] "csi-hostpathplugin-qf6mx" [b90691c5-fbe5-45ea-b6af-52ae35765477] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 06:10:55.179552  365349 system_pods.go:89] "etcd-addons-241520" [267e51ba-ce7e-4d65-9431-6d8524f1085b] Running
	I1210 06:10:55.179557  365349 system_pods.go:89] "kindnet-h9tr4" [b06a5696-c044-44ec-b102-c34cbf0b480b] Running
	I1210 06:10:55.179567  365349 system_pods.go:89] "kube-apiserver-addons-241520" [6bead7dc-7534-41e5-a5cb-dfb46451b561] Running
	I1210 06:10:55.179572  365349 system_pods.go:89] "kube-controller-manager-addons-241520" [5a837375-b97c-40af-b465-7d1ea919beac] Running
	I1210 06:10:55.179579  365349 system_pods.go:89] "kube-ingress-dns-minikube" [7543ba21-d18b-4d73-9940-71a8dfb241c0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 06:10:55.179590  365349 system_pods.go:89] "kube-proxy-srgdx" [7a04de78-15a5-4a0f-b6f4-b7cef4acc511] Running
	I1210 06:10:55.179594  365349 system_pods.go:89] "kube-scheduler-addons-241520" [e176a48a-a983-4774-add2-87513e2748ee] Running
	I1210 06:10:55.179600  365349 system_pods.go:89] "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:10:55.179606  365349 system_pods.go:89] "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 06:10:55.179617  365349 system_pods.go:89] "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 06:10:55.179623  365349 system_pods.go:89] "registry-creds-764b6fb674-xsqwd" [bb826bb4-0c23-4177-9e45-94b0b2c61e46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 06:10:55.179628  365349 system_pods.go:89] "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 06:10:55.179634  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8pzhn" [a9c3f52c-7727-41a9-841c-c1fae4ed4636] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:55.179643  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qgnvq" [ba2995c6-cea0-497b-bd27-d8c59ebe811d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:55.179651  365349 system_pods.go:89] "storage-provisioner" [a864cb16-738a-4b0e-b0db-65485b264f6c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:10:55.179670  365349 retry.go:31] will retry after 394.295586ms: missing components: kube-dns
	I1210 06:10:55.213911  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:55.339217  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:55.339602  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:55.500114  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:55.583642  365349 system_pods.go:86] 19 kube-system pods found
	I1210 06:10:55.583678  365349 system_pods.go:89] "coredns-66bc5c9577-ds7m5" [4ab8c833-9e84-4a9c-a5a3-00db1cac3a38] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:10:55.583688  365349 system_pods.go:89] "csi-hostpath-attacher-0" [51883b17-568a-4e55-98af-0e05b0b82a8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 06:10:55.583696  365349 system_pods.go:89] "csi-hostpath-resizer-0" [1e3ff8b8-58b0-4a82-a161-424bf106360a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 06:10:55.583703  365349 system_pods.go:89] "csi-hostpathplugin-qf6mx" [b90691c5-fbe5-45ea-b6af-52ae35765477] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 06:10:55.583708  365349 system_pods.go:89] "etcd-addons-241520" [267e51ba-ce7e-4d65-9431-6d8524f1085b] Running
	I1210 06:10:55.583714  365349 system_pods.go:89] "kindnet-h9tr4" [b06a5696-c044-44ec-b102-c34cbf0b480b] Running
	I1210 06:10:55.583719  365349 system_pods.go:89] "kube-apiserver-addons-241520" [6bead7dc-7534-41e5-a5cb-dfb46451b561] Running
	I1210 06:10:55.583723  365349 system_pods.go:89] "kube-controller-manager-addons-241520" [5a837375-b97c-40af-b465-7d1ea919beac] Running
	I1210 06:10:55.583729  365349 system_pods.go:89] "kube-ingress-dns-minikube" [7543ba21-d18b-4d73-9940-71a8dfb241c0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 06:10:55.583733  365349 system_pods.go:89] "kube-proxy-srgdx" [7a04de78-15a5-4a0f-b6f4-b7cef4acc511] Running
	I1210 06:10:55.583737  365349 system_pods.go:89] "kube-scheduler-addons-241520" [e176a48a-a983-4774-add2-87513e2748ee] Running
	I1210 06:10:55.583743  365349 system_pods.go:89] "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:10:55.583749  365349 system_pods.go:89] "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 06:10:55.583754  365349 system_pods.go:89] "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 06:10:55.583763  365349 system_pods.go:89] "registry-creds-764b6fb674-xsqwd" [bb826bb4-0c23-4177-9e45-94b0b2c61e46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 06:10:55.583768  365349 system_pods.go:89] "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 06:10:55.583775  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8pzhn" [a9c3f52c-7727-41a9-841c-c1fae4ed4636] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:55.583782  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qgnvq" [ba2995c6-cea0-497b-bd27-d8c59ebe811d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:55.583791  365349 system_pods.go:89] "storage-provisioner" [a864cb16-738a-4b0e-b0db-65485b264f6c] Running
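Each entry in the system_pods listing above renders the pod phase, plus the Ready and ContainersReady conditions for pods that are not yet ready. A hedged sketch of how such a summary string could be derived from a `corev1.Pod` (illustrative only; minikube's actual formatting code may differ):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// summarize renders a pod roughly the way the listing above does: just the
// phase for running pods, and "Phase / Ready:Reason (Message) / ..." for pods
// whose Ready or ContainersReady conditions are not yet True.
func summarize(p corev1.Pod) string {
	out := string(p.Status.Phase)
	for _, c := range p.Status.Conditions {
		if (c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) &&
			c.Status != corev1.ConditionTrue {
			out += fmt.Sprintf(" / %s:%s (%s)", c.Type, c.Reason, c.Message)
		}
	}
	return out
}

func main() {
	p := corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{{
				Type:    corev1.PodReady,
				Status:  corev1.ConditionFalse,
				Reason:  "ContainersNotReady",
				Message: "containers with unready status: [coredns]",
			}},
		},
	}
	// Prints: Pending / Ready:ContainersNotReady (containers with unready status: [coredns])
	fmt.Println(summarize(p))
}
```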
	I1210 06:10:55.583806  365349 retry.go:31] will retry after 561.743673ms: missing components: kube-dns
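The `retry.go:31` line above shows minikube backing off and retrying while `kube-dns` is still missing. As a rough illustration of that pattern only (not minikube's actual implementation; the function name and jittered delay are assumptions):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling check until it reports no missing components,
// sleeping a jittered, growing backoff between attempts -- the same shape
// as the "will retry after 561.743673ms: missing components: kube-dns"
// line above.
func retryUntil(timeout time.Duration, check func() []string) error {
	deadline := time.Now().Add(timeout)
	backoff := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		delay := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
		time.Sleep(delay)
		backoff *= 2
	}
	return fmt.Errorf("timed out waiting for components")
}

func main() {
	attempts := 0
	err := retryUntil(5*time.Second, func() []string {
		attempts++
		if attempts < 3 {
			return []string{"kube-dns"} // pretend kube-dns is not Running yet
		}
		return nil
	})
	fmt.Println("done:", err)
}
```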
	I1210 06:10:55.714900  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:55.839523  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:55.840052  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:56.000411  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:56.151027  365349 system_pods.go:86] 19 kube-system pods found
	I1210 06:10:56.151108  365349 system_pods.go:89] "coredns-66bc5c9577-ds7m5" [4ab8c833-9e84-4a9c-a5a3-00db1cac3a38] Running
	I1210 06:10:56.151136  365349 system_pods.go:89] "csi-hostpath-attacher-0" [51883b17-568a-4e55-98af-0e05b0b82a8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 06:10:56.151162  365349 system_pods.go:89] "csi-hostpath-resizer-0" [1e3ff8b8-58b0-4a82-a161-424bf106360a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 06:10:56.151207  365349 system_pods.go:89] "csi-hostpathplugin-qf6mx" [b90691c5-fbe5-45ea-b6af-52ae35765477] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 06:10:56.151229  365349 system_pods.go:89] "etcd-addons-241520" [267e51ba-ce7e-4d65-9431-6d8524f1085b] Running
	I1210 06:10:56.151252  365349 system_pods.go:89] "kindnet-h9tr4" [b06a5696-c044-44ec-b102-c34cbf0b480b] Running
	I1210 06:10:56.151284  365349 system_pods.go:89] "kube-apiserver-addons-241520" [6bead7dc-7534-41e5-a5cb-dfb46451b561] Running
	I1210 06:10:56.151310  365349 system_pods.go:89] "kube-controller-manager-addons-241520" [5a837375-b97c-40af-b465-7d1ea919beac] Running
	I1210 06:10:56.151337  365349 system_pods.go:89] "kube-ingress-dns-minikube" [7543ba21-d18b-4d73-9940-71a8dfb241c0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 06:10:56.151387  365349 system_pods.go:89] "kube-proxy-srgdx" [7a04de78-15a5-4a0f-b6f4-b7cef4acc511] Running
	I1210 06:10:56.151412  365349 system_pods.go:89] "kube-scheduler-addons-241520" [e176a48a-a983-4774-add2-87513e2748ee] Running
	I1210 06:10:56.151432  365349 system_pods.go:89] "metrics-server-85b7d694d7-rwcgk" [c0c02e94-6958-4eea-9eaa-a67bdb5a9e76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 06:10:56.151454  365349 system_pods.go:89] "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 06:10:56.151487  365349 system_pods.go:89] "registry-6b586f9694-jv6bp" [98fabc13-aaf3-44ca-bcbd-2817949dcb72] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 06:10:56.151512  365349 system_pods.go:89] "registry-creds-764b6fb674-xsqwd" [bb826bb4-0c23-4177-9e45-94b0b2c61e46] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 06:10:56.151537  365349 system_pods.go:89] "registry-proxy-pfbv5" [66beaf69-9d47-4b33-bba2-32ca2a101bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 06:10:56.151561  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8pzhn" [a9c3f52c-7727-41a9-841c-c1fae4ed4636] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:56.151594  365349 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qgnvq" [ba2995c6-cea0-497b-bd27-d8c59ebe811d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 06:10:56.151620  365349 system_pods.go:89] "storage-provisioner" [a864cb16-738a-4b0e-b0db-65485b264f6c] Running
	I1210 06:10:56.151648  365349 system_pods.go:126] duration metric: took 1.650797257s to wait for k8s-apps to be running ...
	I1210 06:10:56.151672  365349 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:10:56.151761  365349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:10:56.173744  365349 system_svc.go:56] duration metric: took 22.063143ms WaitForService to wait for kubelet
	I1210 06:10:56.173775  365349 kubeadm.go:587] duration metric: took 18.531070355s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:10:56.173792  365349 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:10:56.176892  365349 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 06:10:56.176925  365349 node_conditions.go:123] node cpu capacity is 2
	I1210 06:10:56.176940  365349 node_conditions.go:105] duration metric: took 3.142529ms to run NodePressure ...
	I1210 06:10:56.176953  365349 start.go:242] waiting for startup goroutines ...
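The `system_svc.go` lines above verify kubelet by running `systemctl is-active --quiet`, which reports state purely through its exit code. A minimal stand-alone equivalent, run on the node itself (the sudo/ssh plumbing shown in the log is omitted here):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 iff the unit is active,
	// so the returned error alone tells us whether kubelet is running.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```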
	I1210 06:10:56.214154  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:56.340049  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:56.340184  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:56.500615  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:56.714221  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:56.838995  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:56.839448  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:57.001646  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:57.214355  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:57.337876  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:57.339766  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:57.500028  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:57.730081  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:57.837849  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:57.838007  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:58.002580  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:58.215120  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:58.337953  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:58.338105  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:58.500000  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:58.714645  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:58.839451  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:58.839797  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:59.002035  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:59.218119  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:59.338243  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:10:59.338494  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:59.500425  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:10:59.719097  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:10:59.837976  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:10:59.838688  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:00.017871  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:00.242280  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:00.355927  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:00.368691  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:00.500621  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:00.715788  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:00.839484  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:00.839740  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:01.001745  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:01.214078  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:01.339024  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:01.339319  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:01.500353  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:01.715089  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:01.838540  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:01.839039  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:02.001390  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:02.214851  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:02.338317  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:02.340447  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:02.500668  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:02.714704  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:02.839672  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:02.840071  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:03.007211  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:03.215979  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:03.339019  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:03.339536  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:03.508369  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:03.714805  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:03.842923  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:03.843789  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:04.003188  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:04.216672  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:04.372149  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:04.372681  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:04.519104  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:04.715013  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:04.845935  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:04.846249  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:05.001365  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:05.213861  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:05.337737  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:05.338996  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:05.500470  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:05.714514  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:05.838169  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:05.838887  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:06.000735  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:06.214317  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:06.340486  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:06.340976  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:06.502519  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:06.714433  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:06.839883  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:06.843709  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:07.001418  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:07.215647  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:07.338439  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:07.338554  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:07.500680  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:07.722174  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:07.839559  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:07.839801  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:08.002659  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:08.214462  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:08.337832  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:08.338323  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:08.500487  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:08.714457  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:08.840399  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:08.840493  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:09.002014  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:09.214416  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:09.337649  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:09.337950  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:09.499777  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:09.730317  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:09.838773  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:09.840809  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:10.024853  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:10.214499  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:10.338923  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:10.339163  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:10.499767  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:10.714146  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:10.862511  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:10.862758  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:11.001412  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:11.214159  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:11.339510  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:11.340754  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:11.500746  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:11.719286  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:11.838965  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:11.838970  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:12.003828  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:12.214544  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:12.338067  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:12.338246  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:12.500868  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:12.714924  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:12.840328  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:12.840779  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:13.000859  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:13.215536  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:13.338305  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:13.338485  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:13.501124  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:13.714577  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:13.838665  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:13.839755  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:14.001659  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:14.214727  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:14.339032  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:14.339420  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:14.500839  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:14.715317  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:14.838863  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:14.839050  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:15.001268  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:15.214751  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:15.337067  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:15.339762  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:15.500013  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:15.714406  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:15.836545  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:15.838741  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:16.001738  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:16.214115  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:16.338064  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:16.339343  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:16.502402  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:16.713845  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:16.838428  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:16.838611  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:17.002052  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:17.214856  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:17.341035  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:17.341639  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:17.500473  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:17.713910  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:17.837441  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:17.838163  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:18.008103  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:18.213513  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:18.339532  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:18.339740  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:18.501001  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:18.714701  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:18.847306  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:18.848557  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:19.002212  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:19.214595  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:19.336718  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:19.339337  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:19.500882  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:19.714163  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:19.840479  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:19.840557  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:20.007360  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:20.215117  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:20.338143  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:20.338871  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:20.499899  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:20.715042  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:20.839180  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:20.839715  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:21.007217  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:21.215087  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:21.337688  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:21.337863  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:21.499751  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:21.714079  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:21.839188  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 06:11:21.839407  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:22.001840  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:22.214666  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:22.340400  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:22.340823  365349 kapi.go:107] duration metric: took 36.507441437s to wait for kubernetes.io/minikube-addons=registry ...
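The `kapi.go:96` loop polls pods by label selector until they leave Pending; the registry selector above just cleared after ~36.5s, while ingress-nginx and the others keep polling. A hedged client-go sketch of that polling shape (function name, namespace, and details are illustrative, not minikube's code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls kube-system pods matching selector until all are Running.
func waitForLabel(cs kubernetes.Interface, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
					break
				}
			}
			if ready {
				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(cs, "kubernetes.io/minikube-addons=registry", 2*time.Minute); err != nil {
		panic(err)
	}
}
```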
	I1210 06:11:22.500130  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:22.714261  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:22.838127  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:23.002011  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:23.221771  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:23.337692  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:23.501055  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:23.715018  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:23.838931  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:24.001154  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:24.215099  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:24.338469  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:24.500670  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:24.714687  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:24.838466  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:25.002400  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:25.214019  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:25.338592  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:25.501034  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:25.714195  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:25.838393  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:26.003500  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:26.214132  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:26.339186  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:26.500650  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:26.716272  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:26.838056  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:27.008434  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:27.215344  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:27.348563  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:27.503194  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:27.714420  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:27.838441  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:28.001916  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:28.214778  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:28.337882  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:28.499991  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:28.714033  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:28.838037  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:29.000404  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:29.214808  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:29.338473  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:29.501262  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:29.715165  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:29.838914  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:30.000940  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:30.215906  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:30.339440  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:30.501003  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:30.722333  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:30.837688  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:31.006302  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:31.214087  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:31.339855  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:31.500910  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:31.714517  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:31.838011  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:32.000654  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:32.213833  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:32.343639  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:32.500704  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:32.714288  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:32.838353  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:33.002356  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:33.214485  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:33.337916  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:33.500988  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:33.714779  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:33.838247  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:34.001686  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:34.213740  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:34.338325  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:34.500441  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:34.715168  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:34.838670  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:35.018030  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:35.215555  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:35.337493  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:35.500794  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:35.714099  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:35.838391  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:36.001565  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:36.214736  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:36.338312  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:36.500273  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:36.714050  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:36.838668  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:37.000821  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:37.215778  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:37.338615  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:37.500846  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:37.714305  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:37.838276  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:38.001152  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:38.214014  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:38.338290  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:38.500316  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:38.714330  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:38.841143  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:39.003214  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:39.218919  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:39.338205  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:39.502296  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:39.716077  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:39.838596  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:40.006691  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:40.223579  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:40.337747  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:40.501320  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:40.729697  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:40.838406  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:41.003487  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:41.214773  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:41.346630  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:41.500233  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:41.714202  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:41.839120  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:42.001178  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:42.226380  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:42.361477  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:42.500646  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:42.716106  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:42.838295  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:43.001176  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:43.214698  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:43.337940  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:43.499954  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:43.714679  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:43.838239  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:44.001048  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:44.214674  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:44.338054  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:44.500177  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:44.714696  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:44.838687  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:45.001665  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:45.234637  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:45.341275  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:45.500582  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:45.714335  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:45.837769  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:46.000189  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:46.216939  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:46.338229  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:46.500285  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:46.714883  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:46.837886  365349 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 06:11:47.017594  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:47.214232  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:47.338326  365349 kapi.go:107] duration metric: took 1m1.503927338s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 06:11:47.576367  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:47.715045  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:48.002007  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:48.214898  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:48.500129  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:48.714413  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:49.002324  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:49.215852  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:49.500436  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:49.714941  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:50.000968  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 06:11:50.215152  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:50.500608  365349 kapi.go:107] duration metric: took 1m1.503782741s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 06:11:50.503876  365349 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-241520 cluster.
	I1210 06:11:50.506859  365349 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 06:11:50.509668  365349 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
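The `gcp-auth-skip-secret` opt-out mentioned above is just a pod label. A minimal sketch in Go of a pod spec carrying it (pod name, image, and the "true" value are illustrative assumptions; only the label key comes from the message above):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Hypothetical pod that opts out of credential injection; the gcp-auth
	// webhook skips any pod carrying the gcp-auth-skip-secret label key.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-creds",                                        // illustrative name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // key from the message above
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}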
	I1210 06:11:50.715139  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:51.215334  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:51.713822  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:52.235382  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:52.717702  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:53.214644  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:53.714725  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:54.214311  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:54.714739  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:55.218472  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:55.714552  365349 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 06:11:56.214757  365349 kapi.go:107] duration metric: took 1m10.004284177s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 06:11:56.217977  365349 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, default-storageclass, inspektor-gadget, ingress-dns, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1210 06:11:56.221114  365349 addons.go:530] duration metric: took 1m18.578149743s for enable addons: enabled=[registry-creds amd-gpu-device-plugin nvidia-device-plugin cloud-spanner default-storageclass inspektor-gadget ingress-dns storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1210 06:11:56.221180  365349 start.go:247] waiting for cluster config update ...
	I1210 06:11:56.221242  365349 start.go:256] writing updated cluster config ...
	I1210 06:11:56.221570  365349 ssh_runner.go:195] Run: rm -f paused
	I1210 06:11:56.226619  365349 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:11:56.230833  365349 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ds7m5" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.237489  365349 pod_ready.go:94] pod "coredns-66bc5c9577-ds7m5" is "Ready"
	I1210 06:11:56.237520  365349 pod_ready.go:86] duration metric: took 6.655308ms for pod "coredns-66bc5c9577-ds7m5" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.239836  365349 pod_ready.go:83] waiting for pod "etcd-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.245140  365349 pod_ready.go:94] pod "etcd-addons-241520" is "Ready"
	I1210 06:11:56.245168  365349 pod_ready.go:86] duration metric: took 5.251314ms for pod "etcd-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.247745  365349 pod_ready.go:83] waiting for pod "kube-apiserver-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.252527  365349 pod_ready.go:94] pod "kube-apiserver-addons-241520" is "Ready"
	I1210 06:11:56.252557  365349 pod_ready.go:86] duration metric: took 4.785878ms for pod "kube-apiserver-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.255380  365349 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.631193  365349 pod_ready.go:94] pod "kube-controller-manager-addons-241520" is "Ready"
	I1210 06:11:56.631228  365349 pod_ready.go:86] duration metric: took 375.824764ms for pod "kube-controller-manager-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:56.830389  365349 pod_ready.go:83] waiting for pod "kube-proxy-srgdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:57.230999  365349 pod_ready.go:94] pod "kube-proxy-srgdx" is "Ready"
	I1210 06:11:57.231069  365349 pod_ready.go:86] duration metric: took 400.650182ms for pod "kube-proxy-srgdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:57.430658  365349 pod_ready.go:83] waiting for pod "kube-scheduler-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:57.830850  365349 pod_ready.go:94] pod "kube-scheduler-addons-241520" is "Ready"
	I1210 06:11:57.830878  365349 pod_ready.go:86] duration metric: took 400.148657ms for pod "kube-scheduler-addons-241520" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:11:57.830892  365349 pod_ready.go:40] duration metric: took 1.604237464s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:11:57.886532  365349 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1210 06:11:57.890225  365349 out.go:179] * Done! kubectl is now configured to use "addons-241520" cluster and "default" namespace by default
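The pod_ready lines above poll each control-plane label selector until every matching kube-system pod reports a Ready condition. A minimal client-go sketch of that loop, under stated assumptions (selector list, poll interval, and kubeconfig path are illustrative; minikube's own pod_ready.go may differ):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// A subset of the label selectors the log waits on.
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "k8s-app=kube-proxy"}
	for _, sel := range selectors {
		err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					return false, nil // retry on transient API errors
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
				return len(pods.Items) > 0, nil
			})
		if err != nil {
			panic(fmt.Errorf("pods %q never became Ready: %w", sel, err))
		}
		fmt.Printf("pods %q are Ready\n", sel)
	}
}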
	
	
	==> CRI-O <==
	Dec 10 06:11:55 addons-241520 crio[829]: time="2025-12-10T06:11:55.61790458Z" level=info msg="Created container e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea: kube-system/csi-hostpathplugin-qf6mx/csi-snapshotter" id=75303b47-12f5-47bb-ac75-2520677974df name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:11:55 addons-241520 crio[829]: time="2025-12-10T06:11:55.620364404Z" level=info msg="Starting container: e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea" id=0294952c-a023-4b44-bb59-d3c8f8acda0b name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:11:55 addons-241520 crio[829]: time="2025-12-10T06:11:55.623181945Z" level=info msg="Started container" PID=5794 containerID=e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea description=kube-system/csi-hostpathplugin-qf6mx/csi-snapshotter id=0294952c-a023-4b44-bb59-d3c8f8acda0b name=/runtime.v1.RuntimeService/StartContainer sandboxID=f4f06b7d19effc1e62c882cfdb55b1808fa5a75e2e0d4829efc1973dc5e2081d
	Dec 10 06:11:59 addons-241520 crio[829]: time="2025-12-10T06:11:59.004411474Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c3dc38a8-5ee2-4635-af6f-67e30fc05980 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:11:59 addons-241520 crio[829]: time="2025-12-10T06:11:59.004506253Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:11:59 addons-241520 crio[829]: time="2025-12-10T06:11:59.011290466Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1658f14833f781c3ec7bfeaf8cbba39fc25d73cefbf353e350023c23c2dbcfe2 UID:3ccff718-6015-45fc-bd06-1b60258f39ae NetNS:/var/run/netns/33ea9bd4-63a8-4371-844b-19e9f44b9dc2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40017669d0}] Aliases:map[]}"
	Dec 10 06:11:59 addons-241520 crio[829]: time="2025-12-10T06:11:59.011479901Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 06:11:59 addons-241520 crio[829]: time="2025-12-10T06:11:59.028717418Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1658f14833f781c3ec7bfeaf8cbba39fc25d73cefbf353e350023c23c2dbcfe2 UID:3ccff718-6015-45fc-bd06-1b60258f39ae NetNS:/var/run/netns/33ea9bd4-63a8-4371-844b-19e9f44b9dc2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40017669d0}] Aliases:map[]}"
	Dec 10 06:11:59 addons-241520 crio[829]: time="2025-12-10T06:11:59.02902706Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 06:11:59 addons-241520 crio[829]: time="2025-12-10T06:11:59.034364914Z" level=info msg="Ran pod sandbox 1658f14833f781c3ec7bfeaf8cbba39fc25d73cefbf353e350023c23c2dbcfe2 with infra container: default/busybox/POD" id=c3dc38a8-5ee2-4635-af6f-67e30fc05980 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:11:59 addons-241520 crio[829]: time="2025-12-10T06:11:59.035728695Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e961d53b-68d0-4fb8-978f-bd553dd7eb0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:11:59 addons-241520 crio[829]: time="2025-12-10T06:11:59.03590135Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e961d53b-68d0-4fb8-978f-bd553dd7eb0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:11:59 addons-241520 crio[829]: time="2025-12-10T06:11:59.035951566Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e961d53b-68d0-4fb8-978f-bd553dd7eb0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:11:59 addons-241520 crio[829]: time="2025-12-10T06:11:59.036721327Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7997f510-57ff-4eff-932e-ab23fd1d3533 name=/runtime.v1.ImageService/PullImage
	Dec 10 06:11:59 addons-241520 crio[829]: time="2025-12-10T06:11:59.038939621Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 06:12:01 addons-241520 crio[829]: time="2025-12-10T06:12:01.209724589Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=7997f510-57ff-4eff-932e-ab23fd1d3533 name=/runtime.v1.ImageService/PullImage
	Dec 10 06:12:01 addons-241520 crio[829]: time="2025-12-10T06:12:01.210672625Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bd4f4e92-7ef1-49a1-9a29-fcff613cbebf name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:12:01 addons-241520 crio[829]: time="2025-12-10T06:12:01.213500792Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5ebf5ebe-01b7-4ba1-a2ab-6bdfa1ffc413 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:12:01 addons-241520 crio[829]: time="2025-12-10T06:12:01.220506317Z" level=info msg="Creating container: default/busybox/busybox" id=8e00fcd4-ce4f-4767-82f0-8c42583ccb53 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:12:01 addons-241520 crio[829]: time="2025-12-10T06:12:01.220660551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:12:01 addons-241520 crio[829]: time="2025-12-10T06:12:01.227839634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:12:01 addons-241520 crio[829]: time="2025-12-10T06:12:01.228518809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:12:01 addons-241520 crio[829]: time="2025-12-10T06:12:01.245484608Z" level=info msg="Created container ed0a8cabffe49a6bf4277ccc2fbcf09e858deffc7d5d796d74e69e8685fd9807: default/busybox/busybox" id=8e00fcd4-ce4f-4767-82f0-8c42583ccb53 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:12:01 addons-241520 crio[829]: time="2025-12-10T06:12:01.247469413Z" level=info msg="Starting container: ed0a8cabffe49a6bf4277ccc2fbcf09e858deffc7d5d796d74e69e8685fd9807" id=5d0e89e2-b9db-418e-ad42-afab38062e4a name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:12:01 addons-241520 crio[829]: time="2025-12-10T06:12:01.253349602Z" level=info msg="Started container" PID=5893 containerID=ed0a8cabffe49a6bf4277ccc2fbcf09e858deffc7d5d796d74e69e8685fd9807 description=default/busybox/busybox id=5d0e89e2-b9db-418e-ad42-afab38062e4a name=/runtime.v1.RuntimeService/StartContainer sandboxID=1658f14833f781c3ec7bfeaf8cbba39fc25d73cefbf353e350023c23c2dbcfe2
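The CRI-O entries above trace one CRI round trip: ImageStatus finds gcr.io/k8s-minikube/busybox:1.28.4-glibc absent, PullImage fetches it by digest, then CreateContainer/StartContainer run it inside the already-created sandbox. A hedged sketch of the same call sequence against the CRI socket (socket path and error handling are illustrative, not CRI-O's exact wiring):

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket location; kubelet talks to the same endpoints.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
	st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		log.Fatal(err)
	}
	if st.Image == nil { // the "Image ... not found" branch in the log
		if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec}); err != nil {
			log.Fatal(err)
		}
	}
	_ = rt // CreateContainer/StartContainer would follow, given a sandbox ID
}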
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	ed0a8cabffe49       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          9 seconds ago        Running             busybox                                  0                   1658f14833f78       busybox                                     default
	e3e105248b47d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   f4f06b7d19eff       csi-hostpathplugin-qf6mx                    kube-system
	15f67f2cc14b2       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          17 seconds ago       Running             csi-provisioner                          0                   f4f06b7d19eff       csi-hostpathplugin-qf6mx                    kube-system
	706727cbc03fa       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            18 seconds ago       Running             liveness-probe                           0                   f4f06b7d19eff       csi-hostpathplugin-qf6mx                    kube-system
	32f373b06842f       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           19 seconds ago       Running             hostpath                                 0                   f4f06b7d19eff       csi-hostpathplugin-qf6mx                    kube-system
	cc2087138549b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                20 seconds ago       Running             node-driver-registrar                    0                   f4f06b7d19eff       csi-hostpathplugin-qf6mx                    kube-system
	791a0461acaf4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 22 seconds ago       Running             gcp-auth                                 0                   ac1f27e6371ab       gcp-auth-78565c9fb4-744nw                   gcp-auth
	62130e3244ed5       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             25 seconds ago       Running             controller                               0                   da52382fd2f26       ingress-nginx-controller-85d4c799dd-kqczr   ingress-nginx
	e832e618b9556       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            32 seconds ago       Running             gadget                                   0                   494ccea905237       gadget-2srh4                                gadget
	631ab57806ae3       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             36 seconds ago       Running             local-path-provisioner                   0                   e4bb112c099eb       local-path-provisioner-648f6765c9-kvsb5     local-path-storage
	da7b3d50307f0       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              37 seconds ago       Running             yakd                                     0                   3365eb19c69d7       yakd-dashboard-5ff678cb9-v86q9              yakd-dashboard
	3edcc847365a8       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               41 seconds ago       Running             minikube-ingress-dns                     0                   ed5914fb64905       kube-ingress-dns-minikube                   kube-system
	e9f72b624d9a0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              50 seconds ago       Running             registry-proxy                           0                   374978dee187c       registry-proxy-pfbv5                        kube-system
	0043b9d98a397       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             52 seconds ago       Exited              patch                                    2                   023e1f095d7fa       gcp-auth-certs-patch-dz25s                  gcp-auth
	b3d13279bb1f9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   53 seconds ago       Running             csi-external-health-monitor-controller   0                   f4f06b7d19eff       csi-hostpathplugin-qf6mx                    kube-system
	d5d967fa674ce       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             54 seconds ago       Exited              patch                                    2                   69e58268f189f       ingress-nginx-admission-patch-pvxz6         ingress-nginx
	b11ba380657e9       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     55 seconds ago       Running             nvidia-device-plugin-ctr                 0                   6a7da179fadeb       nvidia-device-plugin-daemonset-qbztj        kube-system
	7c4c997d687b5       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      59 seconds ago       Running             volume-snapshot-controller               0                   a4aa3a7fa29d7       snapshot-controller-7d9fbc56b8-qgnvq        kube-system
	7cf2b1b068ab5       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           59 seconds ago       Running             registry                                 0                   f711387c879a4       registry-6b586f9694-jv6bp                   kube-system
	ec50b512c8e8c       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   1a0ae0a08be91       csi-hostpath-resizer-0                      kube-system
	162a35253bd64       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   03614e0787f5f       gcp-auth-certs-create-xzr87                 gcp-auth
	3e78fb6d659d3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   fc125abb512bd       ingress-nginx-admission-create-hzj5c        ingress-nginx
	586ddad1ca64c       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   c4c48253d25db       cloud-spanner-emulator-5bdddb765-fb462      default
	fcb9b12f636ff       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   e67386f467ee0       metrics-server-85b7d694d7-rwcgk             kube-system
	5bf1539b0ce43       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   44077c2eb672f       snapshot-controller-7d9fbc56b8-8pzhn        kube-system
	ccf62bd56b5d1       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   70166cf12cba7       csi-hostpath-attacher-0                     kube-system
	c310e24a2efb9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   294cad811cc34       coredns-66bc5c9577-ds7m5                    kube-system
	75a53210a6a83       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                                                             About a minute ago   Running             storage-provisioner                      0                   f099c08caac5c       storage-provisioner                         kube-system
	c0e0a1b2a34ab       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1                                           About a minute ago   Running             kindnet-cni                              0                   b9d924906a5e3       kindnet-h9tr4                               kube-system
	ba463a83af075       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                                                             About a minute ago   Running             kube-proxy                               0                   be28fb6b5d96a       kube-proxy-srgdx                            kube-system
	a8f3303a4f28e       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                                                             About a minute ago   Running             kube-scheduler                           0                   311c46c40e39c       kube-scheduler-addons-241520                kube-system
	a33aa3e9cb946       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                                                             About a minute ago   Running             kube-apiserver                           0                   9868d4b159432       kube-apiserver-addons-241520                kube-system
	ebaecf86934b5       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                                                             About a minute ago   Running             kube-controller-manager                  0                   cf61a41864df6       kube-controller-manager-addons-241520       kube-system
	88969ee781c52       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             About a minute ago   Running             etcd                                     0                   a058142f24f09       etcd-addons-241520                          kube-system
	
	
	==> coredns [c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749] <==
	[INFO] 10.244.0.12:40772 - 23729 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000098587s
	[INFO] 10.244.0.12:40772 - 50101 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003791623s
	[INFO] 10.244.0.12:40772 - 53967 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.005175696s
	[INFO] 10.244.0.12:40772 - 35629 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000195556s
	[INFO] 10.244.0.12:40772 - 48678 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000128117s
	[INFO] 10.244.0.12:47976 - 56897 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000236156s
	[INFO] 10.244.0.12:47976 - 56660 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112322s
	[INFO] 10.244.0.12:34355 - 10905 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000102115s
	[INFO] 10.244.0.12:34355 - 11151 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00016024s
	[INFO] 10.244.0.12:51487 - 32619 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000118615s
	[INFO] 10.244.0.12:51487 - 32414 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000173558s
	[INFO] 10.244.0.12:52484 - 48828 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001479607s
	[INFO] 10.244.0.12:52484 - 48384 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001462991s
	[INFO] 10.244.0.12:33946 - 16628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000123678s
	[INFO] 10.244.0.12:33946 - 16216 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157147s
	[INFO] 10.244.0.21:38503 - 62804 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000174296s
	[INFO] 10.244.0.21:53061 - 64971 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000514954s
	[INFO] 10.244.0.21:48327 - 59433 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160463s
	[INFO] 10.244.0.21:35758 - 16802 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000231389s
	[INFO] 10.244.0.21:56658 - 17051 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124465s
	[INFO] 10.244.0.21:47054 - 40729 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000154784s
	[INFO] 10.244.0.21:43843 - 42881 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003955152s
	[INFO] 10.244.0.21:40670 - 1548 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003206389s
	[INFO] 10.244.0.21:47293 - 52086 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001568986s
	[INFO] 10.244.0.21:43737 - 49819 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002448452s
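The NXDOMAIN ladder above is ordinary resolv.conf search-list expansion: with the pod default of ndots:5, a name like registry.kube-system.svc.cluster.local (only four dots) is first tried against every search domain, and only the final absolute form answers NOERROR. A self-contained sketch of the expansion (search domains read off the log; ndots handling simplified):

package main

import "fmt"

// expand applies each search domain before trying the name as typed,
// mirroring the query order visible in the coredns log.
func expand(name string, search []string) []string {
	var tries []string
	for _, s := range search {
		tries = append(tries, name+"."+s)
	}
	return append(tries, name)
}

func main() {
	search := []string{
		"kube-system.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	for _, q := range expand("registry.kube-system.svc.cluster.local", search) {
		fmt.Println(q)
	}
}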
	
	
	==> describe nodes <==
	Name:               addons-241520
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-241520
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=addons-241520
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_10_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-241520
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-241520"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:10:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-241520
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:12:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:12:03 +0000   Wed, 10 Dec 2025 06:10:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:12:03 +0000   Wed, 10 Dec 2025 06:10:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:12:03 +0000   Wed, 10 Dec 2025 06:10:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:12:03 +0000   Wed, 10 Dec 2025 06:10:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-241520
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                6d39ab4b-4b9e-4f06-8c01-e4cbe723bf1a
	  Boot ID:                    7e517eb4-cdae-4e97-a158-8132b5e595bf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  default                     cloud-spanner-emulator-5bdddb765-fb462       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gadget                      gadget-2srh4                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  gcp-auth                    gcp-auth-78565c9fb4-744nw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-kqczr    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         86s
	  kube-system                 coredns-66bc5c9577-ds7m5                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     94s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 csi-hostpathplugin-qf6mx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 etcd-addons-241520                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         100s
	  kube-system                 kindnet-h9tr4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      94s
	  kube-system                 kube-apiserver-addons-241520                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-addons-241520        200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-srgdx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-addons-241520                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 metrics-server-85b7d694d7-rwcgk              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         87s
	  kube-system                 nvidia-device-plugin-daemonset-qbztj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 registry-6b586f9694-jv6bp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 registry-creds-764b6fb674-xsqwd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 registry-proxy-pfbv5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 snapshot-controller-7d9fbc56b8-8pzhn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 snapshot-controller-7d9fbc56b8-qgnvq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  local-path-storage          local-path-provisioner-648f6765c9-kvsb5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-v86q9               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 92s                  kube-proxy       
	  Normal   Starting                 108s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 108s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node addons-241520 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node addons-241520 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     107s (x8 over 107s)  kubelet          Node addons-241520 status is now: NodeHasSufficientPID
	  Normal   Starting                 100s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 100s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  100s                 kubelet          Node addons-241520 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    100s                 kubelet          Node addons-241520 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     100s                 kubelet          Node addons-241520 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           95s                  node-controller  Node addons-241520 event: Registered Node addons-241520 in Controller
	  Normal   NodeReady                78s                  kubelet          Node addons-241520 status is now: NodeReady
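The Allocated resources block is just the column sums of the pod table: six 100m requests (coredns, etcd, kindnet, kube-scheduler, metrics-server, ingress-nginx-controller) plus 250m for kube-apiserver and 200m for kube-controller-manager give 1050m of a 2000m node. A quick arithmetic check, with request values copied from the table above:

package main

import "fmt"

func main() {
	// Millicore CPU requests from the pod table above.
	requests := []int{100, 100, 100, 100, 100, 100, 250, 200}
	total := 0
	for _, r := range requests {
		total += r
	}
	const capacity = 2000 // 2 CPUs allocatable
	fmt.Printf("%dm / %dm = %.1f%%\n", total, capacity, 100*float64(total)/float64(capacity))
	// Prints 1050m / 2000m = 52.5%, which kubectl shows rounded as 52%.
}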
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f] <==
	{"level":"warn","ts":"2025-12-10T06:10:26.912675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:26.934694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:26.993601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.031727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.059084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.093715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.116616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.149948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.168659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.192011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.255274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.265985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.310462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.320470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.359137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.394001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.415382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.440035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:27.624900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:46.601060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:46.624556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:57.696811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:57.720685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:57.749093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:10:57.758100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44356","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [791a0461acaf48e79d524ccda615028355ee7e8a80133011ecbf61a56f7b35c8] <==
	2025/12/10 06:11:49 GCP Auth Webhook started!
	2025/12/10 06:11:58 Ready to marshal response ...
	2025/12/10 06:11:58 Ready to write response ...
	2025/12/10 06:11:58 Ready to marshal response ...
	2025/12/10 06:11:58 Ready to write response ...
	2025/12/10 06:11:58 Ready to marshal response ...
	2025/12/10 06:11:58 Ready to write response ...
	
	
	==> kernel <==
	 06:12:11 up  2:54,  0 user,  load average: 3.54, 2.42, 1.82
	Linux addons-241520 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7] <==
	I1210 06:10:43.203721       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:10:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:10:43.460504       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:10:43.460534       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:10:43.460545       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:10:43.461050       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:10:43.661295       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:10:43.661328       1 metrics.go:72] Registering metrics
	I1210 06:10:43.661385       1 controller.go:711] "Syncing nftables rules"
	I1210 06:10:53.447254       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:10:53.447326       1 main.go:301] handling current node
	I1210 06:11:03.447836       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:11:03.447970       1 main.go:301] handling current node
	I1210 06:11:13.447917       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:11:13.447960       1 main.go:301] handling current node
	I1210 06:11:23.447818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:11:23.447847       1 main.go:301] handling current node
	I1210 06:11:33.447190       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:11:33.447256       1 main.go:301] handling current node
	I1210 06:11:43.447880       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:11:43.447913       1 main.go:301] handling current node
	I1210 06:11:53.447575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:11:53.447628       1 main.go:301] handling current node
	I1210 06:12:03.447765       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 06:12:03.447800       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554] <==
	I1210 06:10:46.149756       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.105.213.240"}
	W1210 06:10:46.585538       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1210 06:10:46.613044       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1210 06:10:46.786296       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.101.152.36"}
	W1210 06:10:53.956368       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.152.36:443: connect: connection refused
	E1210 06:10:53.956415       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.152.36:443: connect: connection refused" logger="UnhandledError"
	W1210 06:10:53.956980       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.152.36:443: connect: connection refused
	E1210 06:10:53.957015       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.152.36:443: connect: connection refused" logger="UnhandledError"
	W1210 06:10:54.085228       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.152.36:443: connect: connection refused
	E1210 06:10:54.085270       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.152.36:443: connect: connection refused" logger="UnhandledError"
	W1210 06:10:57.694308       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 06:10:57.713610       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 06:10:57.742283       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1210 06:10:57.757833       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1210 06:11:04.380545       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.194.81:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.194.81:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.194.81:443: connect: connection refused" logger="UnhandledError"
	W1210 06:11:04.380890       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 06:11:04.380961       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 06:11:04.383579       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.194.81:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.194.81:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.194.81:443: connect: connection refused" logger="UnhandledError"
	E1210 06:11:04.390496       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.194.81:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.194.81:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.194.81:443: connect: connection refused" logger="UnhandledError"
	I1210 06:11:04.535045       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 06:12:09.211961       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46048: use of closed network connection
	E1210 06:12:09.348795       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46068: use of closed network connection
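"Failed calling webhook, failing open" above is the admission behavior of failurePolicy: Ignore: while the gcp-auth endpoint still refuses connections, pods are admitted unmutated instead of being rejected. A hedged sketch of such a registration (only the webhook name, service namespace, and service name come from the log; everything else is illustrative, not the addon's actual manifest):

package main

import (
	"fmt"

	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	ignore := admv1.Ignore
	none := admv1.SideEffectClassNone
	path := "/mutate" // path taken from the webhook URL in the log
	cfg := admv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "gcp-auth-webhook"}, // illustrative name
		Webhooks: []admv1.MutatingWebhook{{
			Name: "gcp-auth-mutate.k8s.io",
			ClientConfig: admv1.WebhookClientConfig{
				Service: &admv1.ServiceReference{Namespace: "gcp-auth", Name: "gcp-auth", Path: &path},
			},
			FailurePolicy:           &ignore, // unreachable webhook => admit anyway ("failing open")
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	out, _ := yaml.Marshal(cfg)
	fmt.Println(string(out))
}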
	
	
	==> kube-controller-manager [ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272] <==
	I1210 06:10:36.503150       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:10:36.503162       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 06:10:36.503292       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 06:10:36.503371       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-241520"
	I1210 06:10:36.503417       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 06:10:36.504611       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:10:36.511995       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 06:10:36.512120       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:10:36.529055       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 06:10:36.529244       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 06:10:36.529717       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 06:10:36.530897       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 06:10:36.530939       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 06:10:36.531013       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:10:36.531059       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 06:10:36.531537       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 06:10:36.537878       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 06:10:36.540306       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 06:10:36.547675       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 06:10:56.506320       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1210 06:11:06.489331       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1210 06:11:06.489383       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 06:11:06.521862       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 06:11:06.590330       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:11:06.622323       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f] <==
	I1210 06:10:38.602643       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:10:38.723132       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:10:38.823486       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:10:38.823521       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1210 06:10:38.823587       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:10:38.882875       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:10:38.882923       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:10:38.892975       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:10:38.893523       1 server.go:527] "Version info" version="v1.34.3"
	I1210 06:10:38.893545       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:10:38.895413       1 config.go:200] "Starting service config controller"
	I1210 06:10:38.895442       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:10:38.895461       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:10:38.895465       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:10:38.895493       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:10:38.895497       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:10:38.896141       1 config.go:309] "Starting node config controller"
	I1210 06:10:38.896159       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:10:38.896165       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:10:38.995602       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:10:38.995643       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:10:38.995686       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af] <==
	E1210 06:10:28.862939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1210 06:10:28.863077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:10:28.863127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:10:28.863171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:10:28.869050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:10:28.869316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:10:28.869375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:10:28.869432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:10:28.869486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 06:10:28.869587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:10:28.869627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 06:10:28.872344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:10:28.872414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:10:28.872464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 06:10:28.872533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 06:10:28.872616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:10:28.872738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:10:28.872792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:10:28.872891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:10:29.686570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:10:29.793465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:10:29.804217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:10:29.957405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:10:30.456965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1210 06:10:32.956813       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:11:20 addons-241520 kubelet[1999]: I1210 06:11:20.852280    1999 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="023e1f095d7faa9a163c113bc28397a29e898ef11ad14d9c39ff816304e66867"
	Dec 10 06:11:20 addons-241520 kubelet[1999]: I1210 06:11:20.944869    1999 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v7p8\" (UniqueName: \"kubernetes.io/projected/ff62625e-49bc-4b62-a560-482eb25b0a01-kube-api-access-8v7p8\") pod \"ff62625e-49bc-4b62-a560-482eb25b0a01\" (UID: \"ff62625e-49bc-4b62-a560-482eb25b0a01\") "
	Dec 10 06:11:20 addons-241520 kubelet[1999]: I1210 06:11:20.955164    1999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff62625e-49bc-4b62-a560-482eb25b0a01-kube-api-access-8v7p8" (OuterVolumeSpecName: "kube-api-access-8v7p8") pod "ff62625e-49bc-4b62-a560-482eb25b0a01" (UID: "ff62625e-49bc-4b62-a560-482eb25b0a01"). InnerVolumeSpecName "kube-api-access-8v7p8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 06:11:21 addons-241520 kubelet[1999]: I1210 06:11:21.046242    1999 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8v7p8\" (UniqueName: \"kubernetes.io/projected/ff62625e-49bc-4b62-a560-482eb25b0a01-kube-api-access-8v7p8\") on node \"addons-241520\" DevicePath \"\""
	Dec 10 06:11:21 addons-241520 kubelet[1999]: I1210 06:11:21.881331    1999 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-pfbv5" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 06:11:21 addons-241520 kubelet[1999]: I1210 06:11:21.926644    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-pfbv5" podStartSLOduration=3.6009060120000003 podStartE2EDuration="28.926626741s" podCreationTimestamp="2025-12-10 06:10:53 +0000 UTC" firstStartedPulling="2025-12-10 06:10:55.601713217 +0000 UTC m=+24.379944260" lastFinishedPulling="2025-12-10 06:11:20.927433946 +0000 UTC m=+49.705664989" observedRunningTime="2025-12-10 06:11:21.925636578 +0000 UTC m=+50.703867629" watchObservedRunningTime="2025-12-10 06:11:21.926626741 +0000 UTC m=+50.704857792"
	Dec 10 06:11:22 addons-241520 kubelet[1999]: I1210 06:11:22.883766    1999 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-pfbv5" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 06:11:25 addons-241520 kubelet[1999]: E1210 06:11:25.892028    1999 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 10 06:11:25 addons-241520 kubelet[1999]: E1210 06:11:25.892116    1999 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb826bb4-0c23-4177-9e45-94b0b2c61e46-gcr-creds podName:bb826bb4-0c23-4177-9e45-94b0b2c61e46 nodeName:}" failed. No retries permitted until 2025-12-10 06:11:57.892095874 +0000 UTC m=+86.670326925 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/bb826bb4-0c23-4177-9e45-94b0b2c61e46-gcr-creds") pod "registry-creds-764b6fb674-xsqwd" (UID: "bb826bb4-0c23-4177-9e45-94b0b2c61e46") : secret "registry-creds-gcr" not found
	Dec 10 06:11:34 addons-241520 kubelet[1999]: I1210 06:11:34.962419    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-ingress-dns-minikube" podStartSLOduration=18.992121986 podStartE2EDuration="51.96240116s" podCreationTimestamp="2025-12-10 06:10:43 +0000 UTC" firstStartedPulling="2025-12-10 06:10:56.328895048 +0000 UTC m=+25.107126091" lastFinishedPulling="2025-12-10 06:11:29.299174214 +0000 UTC m=+58.077405265" observedRunningTime="2025-12-10 06:11:29.943182182 +0000 UTC m=+58.721413233" watchObservedRunningTime="2025-12-10 06:11:34.96240116 +0000 UTC m=+63.740632203"
	Dec 10 06:11:35 addons-241520 kubelet[1999]: I1210 06:11:35.975363    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="local-path-storage/local-path-provisioner-648f6765c9-kvsb5" podStartSLOduration=13.244092483 podStartE2EDuration="51.975341342s" podCreationTimestamp="2025-12-10 06:10:44 +0000 UTC" firstStartedPulling="2025-12-10 06:10:56.343733185 +0000 UTC m=+25.121964227" lastFinishedPulling="2025-12-10 06:11:35.074982043 +0000 UTC m=+63.853213086" observedRunningTime="2025-12-10 06:11:35.974379488 +0000 UTC m=+64.752610531" watchObservedRunningTime="2025-12-10 06:11:35.975341342 +0000 UTC m=+64.753572385"
	Dec 10 06:11:35 addons-241520 kubelet[1999]: I1210 06:11:35.975638    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-v86q9" podStartSLOduration=14.404275901 podStartE2EDuration="51.975630685s" podCreationTimestamp="2025-12-10 06:10:44 +0000 UTC" firstStartedPulling="2025-12-10 06:10:56.343413253 +0000 UTC m=+25.121644296" lastFinishedPulling="2025-12-10 06:11:33.914768037 +0000 UTC m=+62.692999080" observedRunningTime="2025-12-10 06:11:34.96673675 +0000 UTC m=+63.744967809" watchObservedRunningTime="2025-12-10 06:11:35.975630685 +0000 UTC m=+64.753861727"
	Dec 10 06:11:38 addons-241520 kubelet[1999]: I1210 06:11:38.989387    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-2srh4" podStartSLOduration=17.995374473 podStartE2EDuration="54.989366238s" podCreationTimestamp="2025-12-10 06:10:44 +0000 UTC" firstStartedPulling="2025-12-10 06:11:01.731382291 +0000 UTC m=+30.509613342" lastFinishedPulling="2025-12-10 06:11:38.725374064 +0000 UTC m=+67.503605107" observedRunningTime="2025-12-10 06:11:38.986318662 +0000 UTC m=+67.764549704" watchObservedRunningTime="2025-12-10 06:11:38.989366238 +0000 UTC m=+67.767597281"
	Dec 10 06:11:41 addons-241520 kubelet[1999]: I1210 06:11:41.354915    1999 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da72af44-9d25-406b-9dca-aa9e33852fe8" path="/var/lib/kubelet/pods/da72af44-9d25-406b-9dca-aa9e33852fe8/volumes"
	Dec 10 06:11:50 addons-241520 kubelet[1999]: I1210 06:11:50.060355    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-kqczr" podStartSLOduration=28.893320582 podStartE2EDuration="1m5.060335164s" podCreationTimestamp="2025-12-10 06:10:45 +0000 UTC" firstStartedPulling="2025-12-10 06:11:09.944981443 +0000 UTC m=+38.723212486" lastFinishedPulling="2025-12-10 06:11:46.111996025 +0000 UTC m=+74.890227068" observedRunningTime="2025-12-10 06:11:47.041299774 +0000 UTC m=+75.819530842" watchObservedRunningTime="2025-12-10 06:11:50.060335164 +0000 UTC m=+78.838566207"
	Dec 10 06:11:51 addons-241520 kubelet[1999]: I1210 06:11:51.038314    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-744nw" podStartSLOduration=27.179952143 podStartE2EDuration="1m5.038292878s" podCreationTimestamp="2025-12-10 06:10:46 +0000 UTC" firstStartedPulling="2025-12-10 06:11:11.21568425 +0000 UTC m=+39.993915293" lastFinishedPulling="2025-12-10 06:11:49.074024985 +0000 UTC m=+77.852256028" observedRunningTime="2025-12-10 06:11:50.060805769 +0000 UTC m=+78.839036820" watchObservedRunningTime="2025-12-10 06:11:51.038292878 +0000 UTC m=+79.816523921"
	Dec 10 06:11:51 addons-241520 kubelet[1999]: I1210 06:11:51.345208    1999 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff62625e-49bc-4b62-a560-482eb25b0a01" path="/var/lib/kubelet/pods/ff62625e-49bc-4b62-a560-482eb25b0a01/volumes"
	Dec 10 06:11:52 addons-241520 kubelet[1999]: I1210 06:11:52.515554    1999 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 10 06:11:52 addons-241520 kubelet[1999]: I1210 06:11:52.515602    1999 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 10 06:11:57 addons-241520 kubelet[1999]: E1210 06:11:57.905689    1999 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 10 06:11:57 addons-241520 kubelet[1999]: E1210 06:11:57.913174    1999 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb826bb4-0c23-4177-9e45-94b0b2c61e46-gcr-creds podName:bb826bb4-0c23-4177-9e45-94b0b2c61e46 nodeName:}" failed. No retries permitted until 2025-12-10 06:13:01.913142676 +0000 UTC m=+150.691373727 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/bb826bb4-0c23-4177-9e45-94b0b2c61e46-gcr-creds") pod "registry-creds-764b6fb674-xsqwd" (UID: "bb826bb4-0c23-4177-9e45-94b0b2c61e46") : secret "registry-creds-gcr" not found
	Dec 10 06:11:58 addons-241520 kubelet[1999]: I1210 06:11:58.050088    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-qf6mx" podStartSLOduration=3.457395012 podStartE2EDuration="1m4.050064779s" podCreationTimestamp="2025-12-10 06:10:54 +0000 UTC" firstStartedPulling="2025-12-10 06:10:54.986661723 +0000 UTC m=+23.764892774" lastFinishedPulling="2025-12-10 06:11:55.579331498 +0000 UTC m=+84.357562541" observedRunningTime="2025-12-10 06:11:56.128771185 +0000 UTC m=+84.907002228" watchObservedRunningTime="2025-12-10 06:11:58.050064779 +0000 UTC m=+86.828295822"
	Dec 10 06:11:58 addons-241520 kubelet[1999]: I1210 06:11:58.814231    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw888\" (UniqueName: \"kubernetes.io/projected/3ccff718-6015-45fc-bd06-1b60258f39ae-kube-api-access-xw888\") pod \"busybox\" (UID: \"3ccff718-6015-45fc-bd06-1b60258f39ae\") " pod="default/busybox"
	Dec 10 06:11:58 addons-241520 kubelet[1999]: I1210 06:11:58.814521    1999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3ccff718-6015-45fc-bd06-1b60258f39ae-gcp-creds\") pod \"busybox\" (UID: \"3ccff718-6015-45fc-bd06-1b60258f39ae\") " pod="default/busybox"
	Dec 10 06:12:02 addons-241520 kubelet[1999]: I1210 06:12:02.158465    1999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.9830835310000001 podStartE2EDuration="4.158447998s" podCreationTimestamp="2025-12-10 06:11:58 +0000 UTC" firstStartedPulling="2025-12-10 06:11:59.036250878 +0000 UTC m=+87.814481921" lastFinishedPulling="2025-12-10 06:12:01.211615345 +0000 UTC m=+89.989846388" observedRunningTime="2025-12-10 06:12:02.15803241 +0000 UTC m=+90.936263469" watchObservedRunningTime="2025-12-10 06:12:02.158447998 +0000 UTC m=+90.936679082"
	
	
	==> storage-provisioner [75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e] <==
	W1210 06:11:47.170427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:11:49.181439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:11:49.190292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:11:51.194375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:11:51.200128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:11:53.202992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:11:53.209904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:11:55.220097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:11:55.228792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:11:57.232691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:11:57.237705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:11:59.240460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:11:59.246206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:12:01.251485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:12:01.259016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:12:03.262276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:12:03.267267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:12:05.270864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:12:05.275419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:12:07.279176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:12:07.283780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:12:09.287603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:12:09.293966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:12:11.298075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:12:11.309090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
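A side note on the storage-provisioner log above: it closes on a steady stream of "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings. A minimal sketch of the suggested read-side migration, assuming the standard kubernetes.io/service-name label that EndpointSlices carry; these commands are illustrative, not taken from this run:

	# the deprecated kind of read the provisioner still performs:
	kubectl --context addons-241520 get endpoints -n kube-system

	# EndpointSlice equivalent; slices are tied to their Service by label:
	kubectl --context addons-241520 get endpointslices -n kube-system \
	  -l kubernetes.io/service-name=kube-dns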
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-241520 -n addons-241520
helpers_test.go:270: (dbg) Run:  kubectl --context addons-241520 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-hzj5c ingress-nginx-admission-patch-pvxz6 registry-creds-764b6fb674-xsqwd
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-241520 describe pod ingress-nginx-admission-create-hzj5c ingress-nginx-admission-patch-pvxz6 registry-creds-764b6fb674-xsqwd
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-241520 describe pod ingress-nginx-admission-create-hzj5c ingress-nginx-admission-patch-pvxz6 registry-creds-764b6fb674-xsqwd: exit status 1 (87.845635ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hzj5c" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pvxz6" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-xsqwd" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-241520 describe pod ingress-nginx-admission-create-hzj5c ingress-nginx-admission-patch-pvxz6 registry-creds-764b6fb674-xsqwd: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable headlamp --alsologtostderr -v=1: exit status 11 (261.987133ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 06:12:12.695925  372826 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:12:12.696758  372826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:12.696796  372826 out.go:374] Setting ErrFile to fd 2...
	I1210 06:12:12.696817  372826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:12.697119  372826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:12:12.697539  372826 mustload.go:66] Loading cluster: addons-241520
	I1210 06:12:12.697991  372826 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:12.698033  372826 addons.go:622] checking whether the cluster is paused
	I1210 06:12:12.698190  372826 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:12.698223  372826 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:12:12.698775  372826 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:12:12.716315  372826 ssh_runner.go:195] Run: systemctl --version
	I1210 06:12:12.716387  372826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:12:12.735795  372826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:12:12.843725  372826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:12:12.843812  372826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:12:12.874384  372826 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:12:12.874427  372826 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:12:12.874433  372826 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:12:12.874438  372826 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:12:12.874442  372826 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:12:12.874445  372826 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:12:12.874448  372826 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:12:12.874452  372826 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:12:12.874455  372826 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:12:12.874462  372826 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:12:12.874466  372826 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:12:12.874469  372826 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:12:12.874481  372826 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:12:12.874484  372826 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:12:12.874487  372826 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:12:12.874494  372826 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:12:12.874501  372826 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:12:12.874508  372826 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:12:12.874511  372826 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:12:12.874514  372826 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:12:12.874519  372826 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:12:12.874524  372826 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:12:12.874527  372826 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:12:12.874530  372826 cri.go:89] found id: ""
	I1210 06:12:12.874586  372826 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:12:12.890545  372826 out.go:203] 
	W1210 06:12:12.893434  372826 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:12:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:12:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:12:12.893459  372826 out.go:285] * 
	* 
	W1210 06:12:12.898581  372826 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:12:12.901593  372826 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.29s)
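Every MK_ADDON_DISABLE_PAUSED failure in this report dies the same way: minikube's paused-state check lists kube-system containers through crictl successfully, then probes runc and hits the missing /run/runc state directory. A minimal by-hand reproduction of that probe, assuming the same profile; the crun hypothesis and the alternate state root below are assumptions, not confirmed from this run:

	# the CRI-side listing that precedes the probe succeeds:
	minikube -p addons-241520 ssh -- sudo crictl ps -a --quiet \
	  --label io.kubernetes.pod.namespace=kube-system

	# the probe itself fails: runc's default state dir was never created
	minikube -p addons-241520 ssh -- sudo runc list -f json
	# => open /run/runc: no such file or directory

	# if CRI-O launches containers through crun, or runc was configured with a
	# non-default root, pointing runc at the actual state dir (the path here
	# is hypothetical) would be the manual equivalent:
	minikube -p addons-241520 ssh -- sudo runc --root /run/crio/runc list -f json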

TestAddons/parallel/CloudSpanner (5.28s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-fb462" [0840ee6c-735f-43a8-9c27-ce0ad07d0b12] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004161814s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (267.982144ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 06:12:29.615981  373270 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:12:29.616866  373270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:29.616884  373270 out.go:374] Setting ErrFile to fd 2...
	I1210 06:12:29.616891  373270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:29.617259  373270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:12:29.617616  373270 mustload.go:66] Loading cluster: addons-241520
	I1210 06:12:29.618073  373270 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:29.618095  373270 addons.go:622] checking whether the cluster is paused
	I1210 06:12:29.618241  373270 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:29.618262  373270 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:12:29.618836  373270 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:12:29.639248  373270 ssh_runner.go:195] Run: systemctl --version
	I1210 06:12:29.639314  373270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:12:29.657984  373270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:12:29.764128  373270 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:12:29.764221  373270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:12:29.796192  373270 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:12:29.796224  373270 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:12:29.796230  373270 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:12:29.796233  373270 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:12:29.796237  373270 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:12:29.796240  373270 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:12:29.796244  373270 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:12:29.796247  373270 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:12:29.796250  373270 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:12:29.796260  373270 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:12:29.796268  373270 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:12:29.796271  373270 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:12:29.796274  373270 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:12:29.796278  373270 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:12:29.796281  373270 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:12:29.796288  373270 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:12:29.796296  373270 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:12:29.796301  373270 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:12:29.796304  373270 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:12:29.796308  373270 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:12:29.796332  373270 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:12:29.796341  373270 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:12:29.796345  373270 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:12:29.796348  373270 cri.go:89] found id: ""
	I1210 06:12:29.796400  373270 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:12:29.813835  373270 out.go:203] 
	W1210 06:12:29.816973  373270 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:12:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:12:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:12:29.817003  373270 out.go:285] * 
	* 
	W1210 06:12:29.822120  373270 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:12:29.825127  373270 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.28s)

TestAddons/parallel/LocalPath (10.47s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-241520 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-241520 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-241520 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [13349c61-36f3-4a1f-8b43-2940c573bdc6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [13349c61-36f3-4a1f-8b43-2940c573bdc6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [13349c61-36f3-4a1f-8b43-2940c573bdc6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003974473s
addons_test.go:969: (dbg) Run:  kubectl --context addons-241520 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 ssh "cat /opt/local-path-provisioner/pvc-0d09bd84-80dd-472f-be69-d05ede5b8612_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-241520 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-241520 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (265.863603ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 06:12:34.075656  373449 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:12:34.076519  373449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:34.076536  373449 out.go:374] Setting ErrFile to fd 2...
	I1210 06:12:34.076542  373449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:34.076844  373449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:12:34.077248  373449 mustload.go:66] Loading cluster: addons-241520
	I1210 06:12:34.077734  373449 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:34.077759  373449 addons.go:622] checking whether the cluster is paused
	I1210 06:12:34.077922  373449 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:34.077950  373449 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:12:34.078790  373449 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:12:34.098062  373449 ssh_runner.go:195] Run: systemctl --version
	I1210 06:12:34.098196  373449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:12:34.116579  373449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:12:34.223782  373449 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:12:34.223875  373449 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:12:34.253332  373449 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:12:34.253415  373449 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:12:34.253437  373449 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:12:34.253457  373449 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:12:34.253489  373449 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:12:34.253510  373449 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:12:34.253530  373449 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:12:34.253550  373449 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:12:34.253584  373449 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:12:34.253605  373449 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:12:34.253623  373449 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:12:34.253642  373449 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:12:34.253671  373449 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:12:34.253691  373449 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:12:34.253710  373449 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:12:34.253733  373449 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:12:34.253777  373449 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:12:34.253795  373449 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:12:34.253812  373449 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:12:34.253837  373449 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:12:34.253865  373449 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:12:34.253884  373449 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:12:34.253914  373449 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:12:34.253934  373449 cri.go:89] found id: ""
	I1210 06:12:34.254039  373449 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:12:34.269890  373449 out.go:203] 
	W1210 06:12:34.272809  373449 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:12:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:12:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:12:34.272835  373449 out.go:285] * 
	* 
	W1210 06:12:34.277912  373449 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:12:34.281506  373449 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.47s)

TestAddons/parallel/NvidiaDevicePlugin (6.36s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-qbztj" [05046e52-8283-4767-9fb0-3ea35511d095] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006907956s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (347.406193ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1210 06:12:24.324056  373113 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:12:24.329432  373113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:24.329460  373113 out.go:374] Setting ErrFile to fd 2...
	I1210 06:12:24.329467  373113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:24.329777  373113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:12:24.330134  373113 mustload.go:66] Loading cluster: addons-241520
	I1210 06:12:24.330736  373113 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:24.330757  373113 addons.go:622] checking whether the cluster is paused
	I1210 06:12:24.333171  373113 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:24.333220  373113 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:12:24.333788  373113 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:12:24.351615  373113 ssh_runner.go:195] Run: systemctl --version
	I1210 06:12:24.351679  373113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:12:24.369950  373113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:12:24.480870  373113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:12:24.480988  373113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:12:24.517585  373113 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:12:24.517623  373113 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:12:24.517629  373113 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:12:24.517633  373113 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:12:24.517637  373113 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:12:24.517641  373113 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:12:24.517644  373113 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:12:24.517647  373113 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:12:24.517650  373113 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:12:24.517684  373113 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:12:24.517695  373113 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:12:24.517699  373113 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:12:24.517702  373113 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:12:24.517706  373113 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:12:24.517709  373113 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:12:24.517714  373113 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:12:24.517724  373113 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:12:24.517728  373113 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:12:24.517730  373113 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:12:24.517734  373113 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:12:24.517756  373113 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:12:24.517766  373113 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:12:24.517770  373113 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:12:24.517773  373113 cri.go:89] found id: ""
	I1210 06:12:24.517851  373113 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:12:24.535367  373113 out.go:203] 
	W1210 06:12:24.538512  373113 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:12:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:12:24.538541  373113 out.go:285] * 
	W1210 06:12:24.543746  373113 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:12:24.546736  373113 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.36s)

TestAddons/parallel/Yakd (5.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-v86q9" [39a26b7e-2815-40ca-8b55-3d8f1a50a587] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003781628s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-241520 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-241520 addons disable yakd --alsologtostderr -v=1: exit status 11 (280.328541ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1210 06:12:17.968462  372888 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:12:17.969304  372888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:17.969326  372888 out.go:374] Setting ErrFile to fd 2...
	I1210 06:12:17.969331  372888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:12:17.969625  372888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:12:17.969964  372888 mustload.go:66] Loading cluster: addons-241520
	I1210 06:12:17.970365  372888 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:17.970386  372888 addons.go:622] checking whether the cluster is paused
	I1210 06:12:17.970499  372888 config.go:182] Loaded profile config "addons-241520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:12:17.970516  372888 host.go:66] Checking if "addons-241520" exists ...
	I1210 06:12:17.971046  372888 cli_runner.go:164] Run: docker container inspect addons-241520 --format={{.State.Status}}
	I1210 06:12:17.994376  372888 ssh_runner.go:195] Run: systemctl --version
	I1210 06:12:17.994458  372888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-241520
	I1210 06:12:18.022619  372888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/addons-241520/id_rsa Username:docker}
	I1210 06:12:18.132054  372888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:12:18.132192  372888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:12:18.163442  372888 cri.go:89] found id: "e3e105248b47de107c09caeb904eedf4e2cac5b45cbd71d687fa3a45fee727ea"
	I1210 06:12:18.163466  372888 cri.go:89] found id: "15f67f2cc14b2c44db28ba03b966823a38ba5501db1d7b2f9c2e0dacfd4ed1b1"
	I1210 06:12:18.163472  372888 cri.go:89] found id: "706727cbc03fa4d0a9bfef0bf8e680d6d87db98a72d0d1e64871123e56c84474"
	I1210 06:12:18.163476  372888 cri.go:89] found id: "32f373b06842fe1cd3594c2e992c1a68b946614f87ec5014e49bd7b5c6fc9597"
	I1210 06:12:18.163480  372888 cri.go:89] found id: "cc2087138549b52903121596d8e7319535780a3b85708b719e3170b8e961b18c"
	I1210 06:12:18.163484  372888 cri.go:89] found id: "3edcc847365a8156b3cc6d2fd2aefa985fe161b63c86a2c627e7e5d5b700bbb3"
	I1210 06:12:18.163487  372888 cri.go:89] found id: "e9f72b624d9a04a42ed512b19992407ac0e1076bf5f9d035f9f7ea79d5b54533"
	I1210 06:12:18.163490  372888 cri.go:89] found id: "b3d13279bb1f98801f86c5a0bc4f1d1818687f61d834d6108b96a51de639d2ad"
	I1210 06:12:18.163493  372888 cri.go:89] found id: "b11ba380657e90d096767d69b45a8f8c192d3575bd2bb1300a880a5e46be0218"
	I1210 06:12:18.163510  372888 cri.go:89] found id: "7c4c997d687b5587da6adee9ccf8eefbacea6a48c77c823e1449a7aeccb74825"
	I1210 06:12:18.163517  372888 cri.go:89] found id: "7cf2b1b068ab59188e8c5fb5b292a128274569b06b59310a10102bb992ec0ee8"
	I1210 06:12:18.163521  372888 cri.go:89] found id: "ec50b512c8e8c3fe1f2a4815e23cd7b44032538e1b51724eaf4397367fde1033"
	I1210 06:12:18.163524  372888 cri.go:89] found id: "fcb9b12f636ffd5f790e8e1e75b50ea39f36342e5170530163aa43601f9a319f"
	I1210 06:12:18.163527  372888 cri.go:89] found id: "5bf1539b0ce43fe90678001281ab38e24a894f5137e22c376c0ddbda31d3a327"
	I1210 06:12:18.163531  372888 cri.go:89] found id: "ccf62bd56b5d147a9c3fd61622c7c024ac32876116a8d90e388b5ab69b50d5b8"
	I1210 06:12:18.163536  372888 cri.go:89] found id: "c310e24a2efb9842bc09a5d2e631fc5fc9fd71fd3cadbd661c94c8ae96283749"
	I1210 06:12:18.163542  372888 cri.go:89] found id: "75a53210a6a83fc54d04573c114ae7eded588a2bd203d161a9c4410db50b9d2e"
	I1210 06:12:18.163547  372888 cri.go:89] found id: "c0e0a1b2a34abc3e6d822b25b938a7b035f675f6f1f108e9059d4255dfe0edf7"
	I1210 06:12:18.163550  372888 cri.go:89] found id: "ba463a83af075ca14a6d86957600b454f77a0b8e3e6143417cef501be91bfe9f"
	I1210 06:12:18.163554  372888 cri.go:89] found id: "a8f3303a4f28e460a0fd017a0a7f3f9eafd0eab17633ef2587be5f4cc60328af"
	I1210 06:12:18.163559  372888 cri.go:89] found id: "a33aa3e9cb946018d82b23e8a1e365c1f41ac9f964363ec2c49178184540c554"
	I1210 06:12:18.163566  372888 cri.go:89] found id: "ebaecf86934b5d9a9b92a8fc63adbc798eae9b05ae414c28d3d03ee953b04272"
	I1210 06:12:18.163570  372888 cri.go:89] found id: "88969ee781c52aa3a255e538421ad8df15d3990d3042143c168d3f2fdb8acc1f"
	I1210 06:12:18.163575  372888 cri.go:89] found id: ""
	I1210 06:12:18.163629  372888 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:12:18.178189  372888 out.go:203] 
	W1210 06:12:18.181173  372888 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:12:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:12:18.181331  372888 out.go:285] * 
	W1210 06:12:18.186543  372888 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:12:18.189337  372888 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-241520 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.29s)
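
All three addon failures above (LocalPath, NvidiaDevicePlugin, Yakd) exit 11 with the same MK_ADDON_DISABLE_PAUSED error: before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and that probe fails with `open /run/runc: no such file or directory` on this CRI-O node. The pods themselves are healthy; only the paused-state check breaks. A minimal sketch for reproducing the probe by hand, using the profile name from the runs above (the state-directory path is taken from the error text and is an assumption, since CRI-O may keep runtime state elsewhere):

	# Re-run minikube's paused-state probe inside the node (fails as in the logs):
	minikube -p addons-241520 ssh -- sudo runc list -f json
	# Check whether the runc state root exists at all (path assumed from the error):
	minikube -p addons-241520 ssh -- ls -d /run/runc
	# The CRI-level listing that precedes the probe still succeeds, which is why
	# the addon pods report Running even though the disable step aborts:
	minikube -p addons-241520 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
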
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (512.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-253997 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1210 06:19:42.700377  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:21:58.799215  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:22:26.548860  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:38.181397  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:38.188028  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:38.199461  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:38.220925  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:38.262428  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:38.343941  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:38.505505  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:38.827510  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:39.469618  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:40.751307  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:43.313385  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:48.434783  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:23:58.677150  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:19.159023  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:25:00.120430  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:26:22.041888  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:26:58.799220  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-253997 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m30.596459117s)

-- stdout --
	* [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:36683
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:36683 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-253997 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-253997 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00025685s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000839678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000839678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-253997 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1": exit status 109
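
The start dies in kubeadm's wait-control-plane phase: the kubelet never reports healthy on http://127.0.0.1:10248/healthz within 4m0s, so minikube exits with K8S_KUBELET_NOT_RUNNING. The stderr above already names the usual probes and a cgroup-driver workaround; as a sketch (unverified for this run, commands assembled from the output above), the diagnosis and retry would look like:

	# Probe the kubelet the same way kubeadm's health check does:
	minikube -p functional-253997 ssh -- curl -sSL http://127.0.0.1:10248/healthz
	minikube -p functional-253997 ssh -- sudo journalctl -xeu kubelet --no-pager
	# Retry with the cgroup driver override the output suggests:
	out/minikube-linux-arm64 delete -p functional-253997
	out/minikube-linux-arm64 start -p functional-253997 --memory=4096 --apiserver-port=8441 --wait=all \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 \
	  --extra-config=kubelet.cgroup-driver=systemd
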
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
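
Note that the host snapshot shows an empty proxy environment even though the failed start ran with HTTP_PROXY=localhost:36683, so the proxy was apparently injected only for the start invocation. When a proxy is genuinely in play, minikube expects the node IP in NO_PROXY, per the warning in stderr; a sketch of the environment it asks for (IP taken from the warning above):

	# Exclude the minikube node IP from proxying before starting:
	export NO_PROXY="$NO_PROXY,192.168.49.2"
	export no_proxy="$no_proxy,192.168.49.2"
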
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-253997
helpers_test.go:244: (dbg) docker inspect functional-253997:

-- stdout --
	[
	    {
	        "Id": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	        "Created": "2025-12-10T06:19:33.832297734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:33.914699563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hosts",
	        "LogPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7-json.log",
	        "Name": "/functional-253997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-253997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-253997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	                "LowerDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-253997",
	                "Source": "/var/lib/docker/volumes/functional-253997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253997",
	                "name.minikube.sigs.k8s.io": "functional-253997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484c42edbd556e17e2c5a836571d3275bc9cbd157b70c100e08a5b73322a62e6",
	            "SandboxKey": "/var/run/docker/netns/484c42edbd55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ba:52:c5:6e:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fc5aadc020d5981cfdab05726de722ae81d653cbc30b66014568069657370fe7",
	                    "EndpointID": "9ecd4882ea5c9fe5581b68ad15a0a2f450e34777aee426fcc158008c81076e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253997",
	                        "256e059cdf1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997: exit status 6 (333.690547ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 06:28:03.466233  401075 status.go:458] kubeconfig endpoint: get endpoint: "functional-253997" does not appear in /home/jenkins/minikube-integration/22094-362392/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
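
The status probe fails for a second, independent reason: the profile's endpoint is missing from the kubeconfig, which is also why stdout warns about a stale context. The repair the output itself suggests, as a sketch:

	# Rewrite the kubeconfig entry for this profile, then confirm the context:
	out/minikube-linux-arm64 -p functional-253997 update-context
	kubectl config current-context
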
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-013831 ssh sudo cat /etc/ssl/certs/364265.pem                                                                                        │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /usr/share/ca-certificates/364265.pem                                                                            │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                        │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /etc/ssl/certs/3642652.pem                                                                                       │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /usr/share/ca-certificates/3642652.pem                                                                           │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /etc/test/nested/copy/364265/hosts                                                                               │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                        │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ cp             │ functional-013831 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                              │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh -n functional-013831 sudo cat /home/docker/cp-test.txt                                                                    │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ cp             │ functional-013831 cp functional-013831:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1440438441/001/cp-test.txt                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh -n functional-013831 sudo cat /home/docker/cp-test.txt                                                                    │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ cp             │ functional-013831 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                       │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format short --alsologtostderr                                                                                     │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format yaml --alsologtostderr                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh -n functional-013831 sudo cat /tmp/does/not/exist/cp-test.txt                                                             │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh pgrep buildkitd                                                                                                           │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ image          │ functional-013831 image ls --format json --alsologtostderr                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image build -t localhost/my-image:functional-013831 testdata/build --alsologtostderr                                          │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format table --alsologtostderr                                                                                     │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls                                                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ delete         │ -p functional-013831                                                                                                                            │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start          │ -p functional-253997 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:19:32
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:19:32.571327  394858 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:19:32.571440  394858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:19:32.571444  394858 out.go:374] Setting ErrFile to fd 2...
	I1210 06:19:32.571447  394858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:19:32.571687  394858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:19:32.572078  394858 out.go:368] Setting JSON to false
	I1210 06:19:32.572907  394858 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10925,"bootTime":1765336648,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:19:32.572993  394858 start.go:143] virtualization:  
	I1210 06:19:32.576361  394858 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:19:32.579804  394858 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:19:32.579858  394858 notify.go:221] Checking for updates...
	I1210 06:19:32.582595  394858 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:19:32.585471  394858 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:19:32.588358  394858 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:19:32.591113  394858 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:19:32.593956  394858 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:19:32.597233  394858 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:19:32.625342  394858 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:19:32.625480  394858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:19:32.681916  394858 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 06:19:32.671984932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:19:32.682011  394858 docker.go:319] overlay module found
	I1210 06:19:32.686876  394858 out.go:179] * Using the docker driver based on user configuration
	I1210 06:19:32.689690  394858 start.go:309] selected driver: docker
	I1210 06:19:32.689700  394858 start.go:927] validating driver "docker" against <nil>
	I1210 06:19:32.689711  394858 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:19:32.690426  394858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:19:32.745293  394858 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 06:19:32.735839439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:19:32.745440  394858 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:19:32.745662  394858 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:19:32.748456  394858 out.go:179] * Using Docker driver with root privileges
	I1210 06:19:32.751185  394858 cni.go:84] Creating CNI manager for ""
	I1210 06:19:32.751243  394858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:19:32.751251  394858 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:19:32.751326  394858 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:19:32.756197  394858 out.go:179] * Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	I1210 06:19:32.758897  394858 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:19:32.761712  394858 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:19:32.764505  394858 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:19:32.764589  394858 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:19:32.783753  394858 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:19:32.783763  394858 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:19:32.825787  394858 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 06:19:32.998758  394858 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1210 06:19:32.999044  394858 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:19:32.999141  394858 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/config.json ...
	I1210 06:19:32.999170  394858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/config.json: {Name:mkf34a48862523e9c590cda9f9b89535d4fcfd3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:19:32.999334  394858 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:19:32.999359  394858 start.go:360] acquireMachinesLock for functional-253997: {Name:mkd4a204596bf14d7530a6c5c103756527acdf26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:19:32.999394  394858 start.go:364] duration metric: took 26.929µs to acquireMachinesLock for "functional-253997"
	I1210 06:19:32.999409  394858 start.go:93] Provisioning new machine with config: &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:19:32.999463  394858 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:19:33.003507  394858 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1210 06:19:33.003906  394858 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:36683 to docker env.
	I1210 06:19:33.004247  394858 start.go:159] libmachine.API.Create for "functional-253997" (driver="docker")
	I1210 06:19:33.004274  394858 client.go:173] LocalClient.Create starting
	I1210 06:19:33.004373  394858 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem
	I1210 06:19:33.004417  394858 main.go:143] libmachine: Decoding PEM data...
	I1210 06:19:33.004434  394858 main.go:143] libmachine: Parsing certificate...
	I1210 06:19:33.004533  394858 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem
	I1210 06:19:33.004554  394858 main.go:143] libmachine: Decoding PEM data...
	I1210 06:19:33.004564  394858 main.go:143] libmachine: Parsing certificate...
	I1210 06:19:33.007896  394858 cli_runner.go:164] Run: docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:19:33.035954  394858 cli_runner.go:211] docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:19:33.036035  394858 network_create.go:284] running [docker network inspect functional-253997] to gather additional debugging logs...
	I1210 06:19:33.036051  394858 cli_runner.go:164] Run: docker network inspect functional-253997
	W1210 06:19:33.054815  394858 cli_runner.go:211] docker network inspect functional-253997 returned with exit code 1
	I1210 06:19:33.054836  394858 network_create.go:287] error running [docker network inspect functional-253997]: docker network inspect functional-253997: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-253997 not found
	I1210 06:19:33.054853  394858 network_create.go:289] output of [docker network inspect functional-253997]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-253997 not found
	
	** /stderr **
	I1210 06:19:33.054982  394858 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:19:33.081911  394858 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d4470}
	I1210 06:19:33.081939  394858 network_create.go:124] attempt to create docker network functional-253997 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 06:19:33.082022  394858 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-253997 functional-253997
	I1210 06:19:33.154468  394858 network_create.go:108] docker network functional-253997 192.168.49.0/24 created
	I1210 06:19:33.154503  394858 kic.go:121] calculated static IP "192.168.49.2" for the "functional-253997" container
	I1210 06:19:33.154581  394858 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:19:33.168629  394858 cli_runner.go:164] Run: docker volume create functional-253997 --label name.minikube.sigs.k8s.io=functional-253997 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:19:33.181870  394858 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:19:33.190783  394858 oci.go:103] Successfully created a docker volume functional-253997
	I1210 06:19:33.190865  394858 cli_runner.go:164] Run: docker run --rm --name functional-253997-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-253997 --entrypoint /usr/bin/test -v functional-253997:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:19:33.359398  394858 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:19:33.540977  394858 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:19:33.541084  394858 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:19:33.541092  394858 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 135.493µs
	I1210 06:19:33.541101  394858 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:19:33.541111  394858 cache.go:107] acquiring lock: {Name:mk8250dc655f821bf2674ed77a7683798ede4f4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:19:33.541141  394858 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:19:33.541145  394858 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 35.43µs
	I1210 06:19:33.541150  394858 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:19:33.541158  394858 cache.go:107] acquiring lock: {Name:mk1c0334e51772e4f6b3429cc05b91ec19d54b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:19:33.541196  394858 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:19:33.541202  394858 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 44.497µs
	I1210 06:19:33.541207  394858 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:19:33.541217  394858 cache.go:107] acquiring lock: {Name:mk41aaba55a3296a937cf380d44dc9920df69546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:19:33.541245  394858 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:19:33.541285  394858 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:19:33.541320  394858 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:19:33.541325  394858 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 43.282µs
	I1210 06:19:33.541331  394858 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:19:33.541350  394858 cache.go:107] acquiring lock: {Name:mk9268471d41d450748d4b0133d1a043378776f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:19:33.541379  394858 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:19:33.541383  394858 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 43.553µs
	I1210 06:19:33.541388  394858 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:19:33.541396  394858 cache.go:107] acquiring lock: {Name:mkc2831727d74e5fadb1c00bd89f0236b385200e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:19:33.541420  394858 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:19:33.541435  394858 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 28.85µs
	I1210 06:19:33.541440  394858 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:19:33.541448  394858 cache.go:107] acquiring lock: {Name:mk7e052900e41419dc78d3311f27db508a8f64b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:19:33.541473  394858 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:19:33.541477  394858 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.629µs
	I1210 06:19:33.541481  394858 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:19:33.541494  394858 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 277.822µs
	I1210 06:19:33.541499  394858 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:19:33.541508  394858 cache.go:87] Successfully saved all images to host disk.
	I1210 06:19:33.756425  394858 oci.go:107] Successfully prepared a docker volume functional-253997
	I1210 06:19:33.756490  394858 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	W1210 06:19:33.756652  394858 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 06:19:33.756759  394858 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:19:33.817030  394858 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-253997 --name functional-253997 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-253997 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-253997 --network functional-253997 --ip 192.168.49.2 --volume functional-253997:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:19:34.142492  394858 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Running}}
	I1210 06:19:34.164967  394858 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:19:34.184968  394858 cli_runner.go:164] Run: docker exec functional-253997 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:19:34.238383  394858 oci.go:144] the created container "functional-253997" has a running status.
	I1210 06:19:34.238403  394858 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa...
	I1210 06:19:34.765877  394858 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:19:34.792700  394858 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:19:34.810036  394858 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:19:34.810048  394858 kic_runner.go:114] Args: [docker exec --privileged functional-253997 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:19:34.853846  394858 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:19:34.872033  394858 machine.go:94] provisionDockerMachine start ...
	I1210 06:19:34.872245  394858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:19:34.891015  394858 main.go:143] libmachine: Using SSH client type: native
	I1210 06:19:34.891586  394858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:19:34.891604  394858 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:19:34.892278  394858 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42302->127.0.0.1:33159: read: connection reset by peer
	I1210 06:19:38.046215  394858 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:19:38.046231  394858 ubuntu.go:182] provisioning hostname "functional-253997"
	I1210 06:19:38.046308  394858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:19:38.068806  394858 main.go:143] libmachine: Using SSH client type: native
	I1210 06:19:38.069127  394858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:19:38.069136  394858 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-253997 && echo "functional-253997" | sudo tee /etc/hostname
	I1210 06:19:38.234784  394858 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:19:38.234865  394858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:19:38.252880  394858 main.go:143] libmachine: Using SSH client type: native
	I1210 06:19:38.253219  394858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:19:38.253233  394858 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-253997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-253997/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-253997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:19:38.405620  394858 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:19:38.405639  394858 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 06:19:38.405668  394858 ubuntu.go:190] setting up certificates
	I1210 06:19:38.405676  394858 provision.go:84] configureAuth start
	I1210 06:19:38.405741  394858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:19:38.423320  394858 provision.go:143] copyHostCerts
	I1210 06:19:38.423381  394858 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 06:19:38.423389  394858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:19:38.423469  394858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 06:19:38.423558  394858 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 06:19:38.423562  394858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:19:38.423586  394858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 06:19:38.423638  394858 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 06:19:38.423642  394858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:19:38.423664  394858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 06:19:38.423743  394858 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.functional-253997 san=[127.0.0.1 192.168.49.2 functional-253997 localhost minikube]
	I1210 06:19:38.661448  394858 provision.go:177] copyRemoteCerts
	I1210 06:19:38.661504  394858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:19:38.661550  394858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:19:38.679471  394858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:19:38.785615  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:19:38.804094  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:19:38.822771  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:19:38.841531  394858 provision.go:87] duration metric: took 435.841532ms to configureAuth
	I1210 06:19:38.841549  394858 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:19:38.841784  394858 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:19:38.841908  394858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:19:38.861705  394858 main.go:143] libmachine: Using SSH client type: native
	I1210 06:19:38.862024  394858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:19:38.862035  394858 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:19:39.168369  394858 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:19:39.168385  394858 machine.go:97] duration metric: took 4.296339444s to provisionDockerMachine
	I1210 06:19:39.168394  394858 client.go:176] duration metric: took 6.164097452s to LocalClient.Create
	I1210 06:19:39.168406  394858 start.go:167] duration metric: took 6.164163783s to libmachine.API.Create "functional-253997"
	I1210 06:19:39.168416  394858 start.go:293] postStartSetup for "functional-253997" (driver="docker")
	I1210 06:19:39.168432  394858 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:19:39.168530  394858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:19:39.168602  394858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:19:39.190052  394858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:19:39.297653  394858 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:19:39.301103  394858 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:19:39.301121  394858 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:19:39.301132  394858 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 06:19:39.301210  394858 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 06:19:39.301304  394858 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 06:19:39.301380  394858 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> hosts in /etc/test/nested/copy/364265
	I1210 06:19:39.301429  394858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364265
	I1210 06:19:39.309362  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:19:39.327363  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts --> /etc/test/nested/copy/364265/hosts (40 bytes)
	I1210 06:19:39.346113  394858 start.go:296] duration metric: took 177.684144ms for postStartSetup
	I1210 06:19:39.346502  394858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:19:39.364383  394858 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/config.json ...
	I1210 06:19:39.364656  394858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:19:39.364697  394858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:19:39.382434  394858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:19:39.486353  394858 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:19:39.491527  394858 start.go:128] duration metric: took 6.492051001s to createHost
	I1210 06:19:39.491543  394858 start.go:83] releasing machines lock for "functional-253997", held for 6.492142841s
	I1210 06:19:39.491613  394858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:19:39.512790  394858 out.go:179] * Found network options:
	I1210 06:19:39.515691  394858 out.go:179]   - HTTP_PROXY=localhost:36683
	W1210 06:19:39.518620  394858 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1210 06:19:39.521460  394858 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1210 06:19:39.524361  394858 ssh_runner.go:195] Run: cat /version.json
	I1210 06:19:39.524417  394858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:19:39.524454  394858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:19:39.524509  394858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:19:39.542349  394858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:19:39.550837  394858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:19:39.644882  394858 ssh_runner.go:195] Run: systemctl --version
	I1210 06:19:39.737482  394858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:19:39.774860  394858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:19:39.779528  394858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:19:39.779608  394858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:19:39.809121  394858 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 06:19:39.809134  394858 start.go:496] detecting cgroup driver to use...
	I1210 06:19:39.809177  394858 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:19:39.809260  394858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:19:39.828390  394858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:19:39.841405  394858 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:19:39.841476  394858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:19:39.860275  394858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:19:39.882050  394858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:19:40.020071  394858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:19:40.157578  394858 docker.go:234] disabling docker service ...
	I1210 06:19:40.157637  394858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:19:40.184537  394858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:19:40.198833  394858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:19:40.321241  394858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:19:40.433748  394858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:19:40.447834  394858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:19:40.462407  394858 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:19:40.609762  394858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:19:40.609841  394858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:19:40.619427  394858 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:19:40.619488  394858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:19:40.629068  394858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:19:40.638431  394858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:19:40.647769  394858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:19:40.656663  394858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:19:40.665887  394858 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:19:40.679700  394858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:19:40.689155  394858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:19:40.697319  394858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:19:40.705178  394858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:19:40.817127  394858 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:19:40.983066  394858 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:19:40.983140  394858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:19:40.987361  394858 start.go:564] Will wait 60s for crictl version
	I1210 06:19:40.987417  394858 ssh_runner.go:195] Run: which crictl
	I1210 06:19:40.991273  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:19:41.022583  394858 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:19:41.022694  394858 ssh_runner.go:195] Run: crio --version
	I1210 06:19:41.051385  394858 ssh_runner.go:195] Run: crio --version
	I1210 06:19:41.085044  394858 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:19:41.087751  394858 cli_runner.go:164] Run: docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:19:41.104309  394858 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:19:41.108397  394858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:19:41.118542  394858 kubeadm.go:884] updating cluster {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:19:41.118703  394858 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:19:41.273350  394858 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:19:41.430761  394858 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:19:41.580963  394858 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:19:41.581046  394858 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:19:41.606825  394858 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 06:19:41.606840  394858 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 06:19:41.606893  394858 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:19:41.607128  394858 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:19:41.607228  394858 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:19:41.607312  394858 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:19:41.607411  394858 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:19:41.607516  394858 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:19:41.607610  394858 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:19:41.607705  394858 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:19:41.609761  394858 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:19:41.610228  394858 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:19:41.610576  394858 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:19:41.611194  394858 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:19:41.611687  394858 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:19:41.613246  394858 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:19:41.613640  394858 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:19:41.614022  394858 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:19:41.908736  394858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:19:41.925925  394858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 06:19:41.947770  394858 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1210 06:19:41.947809  394858 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:19:41.947931  394858 ssh_runner.go:195] Run: which crictl
	I1210 06:19:41.948507  394858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:19:41.949295  394858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:19:41.960911  394858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:19:41.967063  394858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:19:41.973399  394858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1210 06:19:42.013307  394858 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 06:19:42.013342  394858 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:19:42.013409  394858 ssh_runner.go:195] Run: which crictl
	I1210 06:19:42.099136  394858 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 06:19:42.099171  394858 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:19:42.099228  394858 ssh_runner.go:195] Run: which crictl
	I1210 06:19:42.099307  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:19:42.099365  394858 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1210 06:19:42.099380  394858 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:19:42.099405  394858 ssh_runner.go:195] Run: which crictl
	I1210 06:19:42.099466  394858 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1210 06:19:42.099478  394858 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:19:42.099499  394858 ssh_runner.go:195] Run: which crictl
	I1210 06:19:42.099560  394858 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1210 06:19:42.099573  394858 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:19:42.099593  394858 ssh_runner.go:195] Run: which crictl
	I1210 06:19:42.099661  394858 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1210 06:19:42.099672  394858 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:19:42.099694  394858 ssh_runner.go:195] Run: which crictl
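Each "needs transfer" decision above comes from comparing the image ID podman reports on the node with the ID recorded for the locally cached image; a missing image or a hash mismatch forces a reload from cache. A sketch of the check for one image (the hash is copied from the pause line above):

	want=d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
	got=$(sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.10.1 2>/dev/null)
	[ "$got" = "$want" ] || echo 'registry.k8s.io/pause:3.10.1 needs transfer'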
	I1210 06:19:42.099759  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:19:42.150708  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:19:42.150811  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:19:42.150881  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:19:42.150939  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:19:42.150996  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:19:42.151055  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:19:42.151111  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:19:42.270865  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:19:42.270951  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:19:42.271006  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:19:42.271058  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:19:42.271108  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:19:42.271154  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:19:42.271200  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:19:42.388119  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:19:42.388212  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:19:42.388339  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:19:42.388429  394858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 06:19:42.388511  394858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:19:42.388590  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:19:42.388646  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:19:42.388678  394858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 06:19:42.388732  394858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:19:42.471346  394858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 06:19:42.471380  394858 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 06:19:42.471392  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1210 06:19:42.471569  394858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 06:19:42.471642  394858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:19:42.471666  394858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:19:42.471711  394858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 06:19:42.471758  394858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:19:42.472777  394858 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:19:42.472791  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 06:19:42.519194  394858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 06:19:42.519306  394858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:19:42.519374  394858 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 06:19:42.519389  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1210 06:19:42.519441  394858 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 06:19:42.519450  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1210 06:19:42.519511  394858 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 06:19:42.519519  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1210 06:19:42.519571  394858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1210 06:19:42.519619  394858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:19:42.604908  394858 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:19:42.604982  394858 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1210 06:19:42.607357  394858 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 06:19:42.607385  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1210 06:19:42.607431  394858 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 06:19:42.607443  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
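The interleaved stat/scp pairs above are one pattern applied per image: stat -c "%s %y" probes for the tarball on the node, and a status-1 exit triggers a copy from the local cache. Reduced to a shell skeleton (minikube does this through its own SSH runner; $node and the cache path here are placeholders):

	img=/var/lib/minikube/images/etcd_3.6.6-0
	cache=$HOME/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	ssh "$node" stat -c '%s %y' "$img" >/dev/null 2>&1 || scp "$cache" "$node:$img"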
	W1210 06:19:42.850484  394858 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 06:19:42.850647  394858 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
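The arch-mismatch warning above means the cached storage-provisioner reference resolved to an amd64 image, so it is re-fetched for arm64 before loading. What actually landed on the node can be checked via the standard podman field:

	sudo podman image inspect --format '{{.Architecture}}' \
	  gcr.io/k8s-minikube/storage-provisioner:v5
	# want: arm64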
	I1210 06:19:42.967250  394858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 06:19:42.999158  394858 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:19:42.999218  394858 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:19:43.058466  394858 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 06:19:43.058498  394858 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:19:43.058547  394858 ssh_runner.go:195] Run: which crictl
	I1210 06:19:44.626847  394858 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.627600945s)
	I1210 06:19:44.626920  394858 ssh_runner.go:235] Completed: which crictl: (1.5683584s)
	I1210 06:19:44.626992  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:19:44.627062  394858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 06:19:44.627096  394858 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:19:44.627124  394858 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:19:44.662865  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:19:45.986386  394858 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.359239384s)
	I1210 06:19:45.986405  394858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 06:19:45.986422  394858 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:19:45.986437  394858 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.323553228s)
	I1210 06:19:45.986470  394858 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:19:45.986487  394858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:19:47.174510  394858 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.187994535s)
	I1210 06:19:47.174546  394858 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:19:47.174611  394858 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.188128567s)
	I1210 06:19:47.174622  394858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 06:19:47.174637  394858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:19:47.174638  394858 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:19:47.174676  394858 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:19:48.325850  394858 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (1.151154855s)
	I1210 06:19:48.325867  394858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 06:19:48.325884  394858 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:19:48.325917  394858 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.151263969s)
	I1210 06:19:48.325930  394858 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:19:48.325938  394858 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:19:48.325959  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 06:19:50.107513  394858 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.781562273s)
	I1210 06:19:50.107529  394858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 06:19:50.107546  394858 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:19:50.107594  394858 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:19:51.433697  394858 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.326077927s)
	I1210 06:19:51.433716  394858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 06:19:51.433740  394858 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:19:51.433793  394858 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:19:51.972140  394858 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:19:51.972187  394858 cache_images.go:125] Successfully loaded all cached images
	I1210 06:19:51.972192  394858 cache_images.go:94] duration metric: took 10.365339777s to LoadCachedImages
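Each "Loading image" step above is a podman load of the copied tarball into the node's image store, which CRI-O shares. A one-liner to confirm all eight images are now visible to the runtime (crictl path as used throughout this log):

	sudo /usr/local/bin/crictl images | \
	  grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause|storage-provisioner'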
	I1210 06:19:51.972205  394858 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1210 06:19:51.972297  394858 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-253997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
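The unit fragment above is written as a systemd drop-in (10-kubeadm.conf, a few lines below); the empty ExecStart= line clears the packaged command before the override. After the later daemon-reload, the effective command line can be inspected with:

	systemctl cat kubelet.service
	systemctl show kubelet -p ExecStart --no-pager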
	I1210 06:19:51.972422  394858 ssh_runner.go:195] Run: crio config
	I1210 06:19:52.050009  394858 cni.go:84] Creating CNI manager for ""
	I1210 06:19:52.050020  394858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:19:52.050041  394858 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:19:52.050064  394858 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-253997 NodeName:functional-253997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:19:52.050183  394858 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-253997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
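Before init is attempted, the generated file can be validated by kubeadm itself without touching the node; kubeadm init supports a dry-run mode, so under the same binary path used here:

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run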
	I1210 06:19:52.050253  394858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:19:52.058778  394858 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 06:19:52.058851  394858 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:19:52.066925  394858 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1210 06:19:52.066950  394858 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	I1210 06:19:52.066992  394858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:19:52.067007  394858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 06:19:52.067111  394858 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:19:52.067171  394858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 06:19:52.075467  394858 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 06:19:52.075492  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1210 06:19:52.086018  394858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 06:19:52.086092  394858 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 06:19:52.086103  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1210 06:19:52.099884  394858 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 06:19:52.099920  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
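The "Not caching binary" lines above fetch kubeadm, kubelet and kubectl straight from dl.k8s.io, with a .sha256 file pinned beside each download. The same verification can be reproduced by hand, following the standard release pattern:

	curl -LO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet
	curl -LO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check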
	I1210 06:19:52.886275  394858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:19:52.895266  394858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:19:52.909884  394858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:19:52.924315  394858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1210 06:19:52.939223  394858 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:19:52.944190  394858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:19:52.955353  394858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:19:53.066773  394858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:19:53.083815  394858 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997 for IP: 192.168.49.2
	I1210 06:19:53.083828  394858 certs.go:195] generating shared ca certs ...
	I1210 06:19:53.083844  394858 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:19:53.083994  394858 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 06:19:53.084033  394858 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 06:19:53.084039  394858 certs.go:257] generating profile certs ...
	I1210 06:19:53.084090  394858 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key
	I1210 06:19:53.084099  394858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt with IP's: []
	I1210 06:19:53.578549  394858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt ...
	I1210 06:19:53.578567  394858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: {Name:mkd6d142709b63f10628ca98a63be7c1fdd01971 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:19:53.578773  394858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key ...
	I1210 06:19:53.578780  394858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key: {Name:mka967a71621dbda062d53deae4dfc2c6dea4d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:19:53.578869  394858 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423
	I1210 06:19:53.578884  394858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt.d56e9423 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 06:19:53.816492  394858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt.d56e9423 ...
	I1210 06:19:53.816508  394858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt.d56e9423: {Name:mk06fc61200f73e62af18a1926546520ac99fb59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:19:53.816692  394858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423 ...
	I1210 06:19:53.816700  394858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423: {Name:mka34440cc42112c484e4f09857a5450d8cfa5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:19:53.816786  394858 certs.go:382] copying /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt.d56e9423 -> /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt
	I1210 06:19:53.816866  394858 certs.go:386] copying /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423 -> /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key
	I1210 06:19:53.816922  394858 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key
	I1210 06:19:53.816934  394858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt with IP's: []
	I1210 06:19:54.751090  394858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt ...
	I1210 06:19:54.751108  394858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt: {Name:mkf3ba5256560d8ddb69aa1cabaebb3869a157b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:19:54.751310  394858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key ...
	I1210 06:19:54.751319  394858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key: {Name:mk32c42bfa8af52412ddf821fbe17aa0df7d2f97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:19:54.751526  394858 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 06:19:54.751571  394858 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 06:19:54.751579  394858 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:19:54.751603  394858 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:19:54.751626  394858 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:19:54.751657  394858 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 06:19:54.751701  394858 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:19:54.752335  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:19:54.773266  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:19:54.796095  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:19:54.815502  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:19:54.834505  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:19:54.853558  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:19:54.872967  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:19:54.892149  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:19:54.912953  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 06:19:54.934168  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 06:19:54.953974  394858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:19:54.972871  394858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:19:54.986600  394858 ssh_runner.go:195] Run: openssl version
	I1210 06:19:54.993829  394858 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 06:19:55.003297  394858 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 06:19:55.013703  394858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 06:19:55.019651  394858 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:19:55.019732  394858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 06:19:55.062470  394858 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:19:55.070807  394858 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/364265.pem /etc/ssl/certs/51391683.0
	I1210 06:19:55.079512  394858 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 06:19:55.088111  394858 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 06:19:55.096982  394858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 06:19:55.101653  394858 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:19:55.101712  394858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 06:19:55.143788  394858 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:19:55.152095  394858 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3642652.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:19:55.160680  394858 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:19:55.169138  394858 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:19:55.177625  394858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:19:55.182322  394858 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:19:55.182379  394858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:19:55.224036  394858 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:19:55.232150  394858 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
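The eight-hex-digit link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, which is why each ln -fs is preceded by an openssl x509 -hash run; the mapping can be reproduced directly:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941, matching the /etc/ssl/certs/b5213941.0 symlink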
	I1210 06:19:55.240447  394858 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:19:55.244765  394858 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:19:55.244810  394858 kubeadm.go:401] StartCluster: {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:19:55.244876  394858 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:19:55.244942  394858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:19:55.273536  394858 cri.go:89] found id: ""
	I1210 06:19:55.273598  394858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:19:55.282245  394858 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:19:55.290776  394858 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:19:55.290832  394858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:19:55.299488  394858 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:19:55.299497  394858 kubeadm.go:158] found existing configuration files:
	
	I1210 06:19:55.299558  394858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:19:55.307589  394858 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:19:55.307651  394858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:19:55.315380  394858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:19:55.323983  394858 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:19:55.324062  394858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:19:55.331987  394858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:19:55.340604  394858 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:19:55.340683  394858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:19:55.348826  394858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:19:55.357174  394858 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:19:55.357254  394858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:19:55.365772  394858 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:19:55.415173  394858 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:19:55.415549  394858 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:19:55.492275  394858 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:19:55.492338  394858 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:19:55.492372  394858 kubeadm.go:319] OS: Linux
	I1210 06:19:55.492416  394858 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:19:55.492463  394858 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:19:55.492509  394858 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:19:55.492556  394858 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:19:55.492603  394858 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:19:55.492650  394858 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:19:55.492694  394858 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:19:55.492740  394858 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:19:55.492785  394858 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:19:55.565029  394858 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:19:55.565153  394858 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:19:55.565271  394858 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:19:55.580719  394858 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:19:55.589276  394858 out.go:252]   - Generating certificates and keys ...
	I1210 06:19:55.589368  394858 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:19:55.589431  394858 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:19:55.743600  394858 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:19:56.017044  394858 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:19:56.410817  394858 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:19:56.611007  394858 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:19:57.134263  394858 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:19:57.134558  394858 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-253997 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 06:19:57.222674  394858 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:19:57.222841  394858 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-253997 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 06:19:57.398607  394858 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:19:57.673364  394858 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:19:58.101429  394858 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:19:58.101599  394858 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:19:58.955874  394858 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:19:59.023083  394858 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:19:59.181267  394858 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:19:59.378472  394858 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:19:59.643510  394858 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:19:59.644365  394858 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:19:59.647248  394858 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:19:59.653634  394858 out.go:252]   - Booting up control plane ...
	I1210 06:19:59.653739  394858 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:19:59.653822  394858 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:19:59.653893  394858 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:19:59.668507  394858 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:19:59.668633  394858 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:19:59.678632  394858 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:19:59.678933  394858 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:19:59.679351  394858 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:19:59.812234  394858 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:19:59.812346  394858 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:23:59.812093  394858 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00025685s
	I1210 06:23:59.812127  394858 kubeadm.go:319] 
	I1210 06:23:59.812190  394858 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:23:59.812222  394858 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:23:59.812325  394858 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:23:59.812330  394858 kubeadm.go:319] 
	I1210 06:23:59.812472  394858 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:23:59.812515  394858 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:23:59.812544  394858 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:23:59.812548  394858 kubeadm.go:319] 
	I1210 06:23:59.816001  394858 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:23:59.816413  394858 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:23:59.816520  394858 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:23:59.816754  394858 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:23:59.816758  394858 kubeadm.go:319] 
	I1210 06:23:59.816826  394858 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 06:23:59.816965  394858 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-253997 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-253997 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00025685s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
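[Editor's note] The failure above is the crux of this run: kubeadm's kubelet-check never sees a healthy kubelet on the local healthz port before its 4m0s deadline. A minimal way to reproduce the same probe by hand on the node (profile name taken from this run; the URL is the one kubeadm polls in the log above):

	# run kubeadm's health probe manually inside the minikube node
	minikube ssh -p functional-253997 "curl -sSL http://127.0.0.1:10248/healthz"
	# a healthy kubelet answers "ok"; in this run it never comes up, so the call hangs until it times out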
	
	I1210 06:23:59.817053  394858 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 06:24:00.286016  394858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:24:00.333065  394858 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:24:00.333172  394858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:24:00.355678  394858 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:24:00.355697  394858 kubeadm.go:158] found existing configuration files:
	
	I1210 06:24:00.355758  394858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:24:00.380498  394858 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:24:00.380568  394858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:24:00.394924  394858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:24:00.412272  394858 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:24:00.412774  394858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:24:00.425067  394858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:24:00.436916  394858 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:24:00.436989  394858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:24:00.459986  394858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:24:00.472397  394858 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:24:00.472479  394858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:24:00.487013  394858 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:24:00.559938  394858 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:24:00.559989  394858 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:24:00.662967  394858 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:24:00.663036  394858 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:24:00.663075  394858 kubeadm.go:319] OS: Linux
	I1210 06:24:00.663119  394858 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:24:00.663172  394858 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:24:00.663235  394858 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:24:00.663282  394858 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:24:00.663329  394858 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:24:00.663377  394858 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:24:00.663422  394858 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:24:00.663468  394858 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:24:00.663514  394858 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:24:00.732250  394858 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:24:00.732355  394858 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:24:00.732445  394858 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:24:00.745599  394858 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:24:00.751029  394858 out.go:252]   - Generating certificates and keys ...
	I1210 06:24:00.751122  394858 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:24:00.751186  394858 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:24:00.751261  394858 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:24:00.751320  394858 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:24:00.751388  394858 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:24:00.751441  394858 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:24:00.751502  394858 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:24:00.751562  394858 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:24:00.751637  394858 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:24:00.751708  394858 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:24:00.751744  394858 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:24:00.751799  394858 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:24:01.019983  394858 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:24:01.308341  394858 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:24:01.689173  394858 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:24:02.030599  394858 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:24:02.477999  394858 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:24:02.478669  394858 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:24:02.481349  394858 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:24:02.484650  394858 out.go:252]   - Booting up control plane ...
	I1210 06:24:02.484754  394858 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:24:02.484832  394858 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:24:02.485925  394858 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:24:02.502960  394858 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:24:02.503066  394858 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:24:02.510974  394858 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:24:02.511438  394858 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:24:02.511663  394858 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:24:02.653650  394858 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:24:02.653765  394858 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:28:02.654030  394858 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000839678s
	I1210 06:28:02.654146  394858 kubeadm.go:319] 
	I1210 06:28:02.654208  394858 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:28:02.654241  394858 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:28:02.654344  394858 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:28:02.654348  394858 kubeadm.go:319] 
	I1210 06:28:02.654451  394858 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:28:02.654482  394858 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:28:02.654511  394858 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:28:02.654514  394858 kubeadm.go:319] 
	I1210 06:28:02.658903  394858 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:28:02.659295  394858 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:28:02.659397  394858 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:28:02.659617  394858 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:28:02.659624  394858 kubeadm.go:319] 
	I1210 06:28:02.659688  394858 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:28:02.659740  394858 kubeadm.go:403] duration metric: took 8m7.414933062s to StartCluster
	I1210 06:28:02.659775  394858 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:28:02.659834  394858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:28:02.691385  394858 cri.go:89] found id: ""
	I1210 06:28:02.691418  394858 logs.go:282] 0 containers: []
	W1210 06:28:02.691428  394858 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:28:02.691435  394858 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:28:02.691520  394858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:28:02.720679  394858 cri.go:89] found id: ""
	I1210 06:28:02.720693  394858 logs.go:282] 0 containers: []
	W1210 06:28:02.720712  394858 logs.go:284] No container was found matching "etcd"
	I1210 06:28:02.720717  394858 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:28:02.720784  394858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:28:02.751059  394858 cri.go:89] found id: ""
	I1210 06:28:02.751073  394858 logs.go:282] 0 containers: []
	W1210 06:28:02.751080  394858 logs.go:284] No container was found matching "coredns"
	I1210 06:28:02.751087  394858 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:28:02.751147  394858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:28:02.778274  394858 cri.go:89] found id: ""
	I1210 06:28:02.778289  394858 logs.go:282] 0 containers: []
	W1210 06:28:02.778296  394858 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:28:02.778302  394858 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:28:02.778371  394858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:28:02.804231  394858 cri.go:89] found id: ""
	I1210 06:28:02.804256  394858 logs.go:282] 0 containers: []
	W1210 06:28:02.804264  394858 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:28:02.804269  394858 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:28:02.804336  394858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:28:02.830716  394858 cri.go:89] found id: ""
	I1210 06:28:02.830740  394858 logs.go:282] 0 containers: []
	W1210 06:28:02.830758  394858 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:28:02.830766  394858 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:28:02.830825  394858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:28:02.856644  394858 cri.go:89] found id: ""
	I1210 06:28:02.856659  394858 logs.go:282] 0 containers: []
	W1210 06:28:02.856666  394858 logs.go:284] No container was found matching "kindnet"
	I1210 06:28:02.856717  394858 logs.go:123] Gathering logs for container status ...
	I1210 06:28:02.856728  394858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:28:02.888346  394858 logs.go:123] Gathering logs for kubelet ...
	I1210 06:28:02.888364  394858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:28:02.954499  394858 logs.go:123] Gathering logs for dmesg ...
	I1210 06:28:02.954519  394858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:28:02.970595  394858 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:28:02.970619  394858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:28:03.038935  394858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:28:03.030151    5509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:28:03.031051    5509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:28:03.032814    5509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:28:03.033556    5509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:28:03.035116    5509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:28:03.030151    5509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:28:03.031051    5509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:28:03.032814    5509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:28:03.033556    5509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:28:03.035116    5509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
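[Editor's note] The "connection refused" on localhost:8441 is a downstream symptom, not a separate fault: the apiserver container was never created because the kubelet never started. The same crictl query minikube runs above can be issued by hand to confirm (hypothetical usage, wrapping the log's own command for this profile):

	minikube ssh -p functional-253997 "sudo crictl ps -a --name=kube-apiserver"
	# empty output here matches the "0 containers" lines in the log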
	I1210 06:28:03.038946  394858 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:28:03.038957  394858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1210 06:28:03.085410  394858 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000839678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:28:03.085457  394858 out.go:285] * 
	W1210 06:28:03.085582  394858 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000839678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:28:03.085634  394858 out.go:285] * 
	W1210 06:28:03.087787  394858 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:28:03.093831  394858 out.go:203] 
	W1210 06:28:03.097554  394858 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000839678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:28:03.097606  394858 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:28:03.097627  394858 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:28:03.101300  394858 out.go:203] 
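[Editor's note] The Suggestion line above is minikube's own proposed remedy. Spelled out as a complete command for this profile (flag exactly as printed in the log; whether it helps depends on the host's cgroup layout, for which see the kubelet section below):

	minikube start -p functional-253997 --extra-config=kubelet.cgroup-driver=systemd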
	
	
	==> CRI-O <==
	Dec 10 06:19:42 functional-253997 crio[840]: time="2025-12-10T06:19:42.468684434Z" level=info msg="Image registry.k8s.io/etcd:3.6.6-0 not found" id=6f21eb71-95f8-412a-a8a5-7afc75f9c843 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:42 functional-253997 crio[840]: time="2025-12-10T06:19:42.468720677Z" level=info msg="Neither image nor artfiact registry.k8s.io/etcd:3.6.6-0 found" id=6f21eb71-95f8-412a-a8a5-7afc75f9c843 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:44 functional-253997 crio[840]: time="2025-12-10T06:19:44.659071962Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ada8b686-e7c5-4820-aedf-7af0f9d0cda5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:44 functional-253997 crio[840]: time="2025-12-10T06:19:44.659583919Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=ada8b686-e7c5-4820-aedf-7af0f9d0cda5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:44 functional-253997 crio[840]: time="2025-12-10T06:19:44.659623321Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=ada8b686-e7c5-4820-aedf-7af0f9d0cda5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:44 functional-253997 crio[840]: time="2025-12-10T06:19:44.699146262Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=667705a7-326b-47ab-99ad-ea81368cba49 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:44 functional-253997 crio[840]: time="2025-12-10T06:19:44.69930505Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=667705a7-326b-47ab-99ad-ea81368cba49 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:44 functional-253997 crio[840]: time="2025-12-10T06:19:44.69936229Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=667705a7-326b-47ab-99ad-ea81368cba49 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:46 functional-253997 crio[840]: time="2025-12-10T06:19:46.016675656Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=296b7c49-0348-41d2-bb92-c26a15451b30 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:46 functional-253997 crio[840]: time="2025-12-10T06:19:46.016990417Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=296b7c49-0348-41d2-bb92-c26a15451b30 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:46 functional-253997 crio[840]: time="2025-12-10T06:19:46.017030336Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=296b7c49-0348-41d2-bb92-c26a15451b30 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:55 functional-253997 crio[840]: time="2025-12-10T06:19:55.569098311Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=b03c8e2a-26c0-4ec2-911c-e16d66bd9a9f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:55 functional-253997 crio[840]: time="2025-12-10T06:19:55.572316363Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=197db88e-c3e0-454c-917a-615a6c4d92bd name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:55 functional-253997 crio[840]: time="2025-12-10T06:19:55.574077289Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=f7bcfdcd-c780-4e5f-a852-35b8f51a6da6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:55 functional-253997 crio[840]: time="2025-12-10T06:19:55.575467158Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=0f8b13d7-d78a-4096-8719-65cf236f43b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:55 functional-253997 crio[840]: time="2025-12-10T06:19:55.576335107Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=c6082c01-2e20-44fc-97d1-bff7dbe71994 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:55 functional-253997 crio[840]: time="2025-12-10T06:19:55.577895505Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d5763e3f-7402-4f2c-81a9-e2e8e823d9ce name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:19:55 functional-253997 crio[840]: time="2025-12-10T06:19:55.578853606Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=632793d2-4163-4f7c-a1e5-b85cfb4be863 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:00 functional-253997 crio[840]: time="2025-12-10T06:24:00.735540844Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=00671922-bc8d-4095-baec-3d479c580b99 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:00 functional-253997 crio[840]: time="2025-12-10T06:24:00.737273879Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=5f041392-9da8-4684-ae94-eb57a2f9d8e0 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:00 functional-253997 crio[840]: time="2025-12-10T06:24:00.73880751Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=74d7507b-ca55-42dc-9b3c-6220ef2fd3cd name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:00 functional-253997 crio[840]: time="2025-12-10T06:24:00.740238657Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=2bf9c276-dbfa-4606-aec1-9d1b708be0b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:00 functional-253997 crio[840]: time="2025-12-10T06:24:00.741298404Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=40fe5241-1573-4ac4-8256-8667ffe7da3e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:00 functional-253997 crio[840]: time="2025-12-10T06:24:00.742689018Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=81a64854-bc42-42fa-8941-81bb2033c1d0 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:00 functional-253997 crio[840]: time="2025-12-10T06:24:00.743606018Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=1ae11f2e-3e4d-41c4-a98d-c87e35096497 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:28:04.130068    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:28:04.131158    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:28:04.132965    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:28:04.133824    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:28:04.135560    5616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 06:15] overlayfs: idmapped layers are currently not supported
	[Dec10 06:16] overlayfs: idmapped layers are currently not supported
	[Dec10 06:19] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:28:04 up  3:10,  0 user,  load average: 0.16, 0.56, 1.14
	Linux functional-253997 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:28:01 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:28:02 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 10 06:28:02 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:28:02 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:28:02 functional-253997 kubelet[5429]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:28:02 functional-253997 kubelet[5429]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:28:02 functional-253997 kubelet[5429]: E1210 06:28:02.698495    5429 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:28:02 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:28:02 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:28:03 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 10 06:28:03 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:28:03 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:28:03 functional-253997 kubelet[5528]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:28:03 functional-253997 kubelet[5528]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:28:03 functional-253997 kubelet[5528]: E1210 06:28:03.447534    5528 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:28:03 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:28:03 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:28:04 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 10 06:28:04 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:28:04 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:28:04 functional-253997 kubelet[5620]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:28:04 functional-253997 kubelet[5620]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:28:04 functional-253997 kubelet[5620]: E1210 06:28:04.186517    5620 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:28:04 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:28:04 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
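The restart loop above is the root cause of this failure: kubelet v1.35.0-rc.1 validates the host cgroup mode at startup (presumably via the kubelet's failCgroupV1 setting) and exits when it finds cgroup v1, and the kicbase container inherits the cgroup mode of the Jenkins host (Ubuntu 20.04, kernel 5.15.0-1084-aws). A minimal way to check which mode a host is in, assuming shell access to the node:

    # cgroup2fs means cgroup v2 (unified); tmpfs means cgroup v1, which is what kubelet rejects here
    stat -fc %T /sys/fs/cgroup/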
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 6 (342.963532ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 06:28:04.627795  401293 status.go:458] kubeconfig endpoint: get endpoint: "functional-253997" does not appear in /home/jenkins/minikube-integration/22094-362392/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (512.12s)
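The exit status 6 above comes from the profile's endpoint being absent from the kubeconfig, and the captured stdout already names the remedy. A minimal recovery sketch, using the profile name from this run (output will vary):

    minikube update-context -p functional-253997
    kubectl config current-context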

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (369.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1210 06:28:04.645588  364265 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-253997 --alsologtostderr -v=8
E1210 06:28:38.175486  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:29:05.883353  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:31:58.799551  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:33:21.910492  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:33:38.175981  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-253997 --alsologtostderr -v=8: exit status 80 (6m6.757855711s)

-- stdout --
	* [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1210 06:28:04.696682  401365 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:28:04.696859  401365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:28:04.696892  401365 out.go:374] Setting ErrFile to fd 2...
	I1210 06:28:04.696914  401365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:28:04.697215  401365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:28:04.697662  401365 out.go:368] Setting JSON to false
	I1210 06:28:04.698567  401365 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11437,"bootTime":1765336648,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:28:04.698673  401365 start.go:143] virtualization:  
	I1210 06:28:04.702443  401365 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:28:04.705481  401365 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:28:04.705615  401365 notify.go:221] Checking for updates...
	I1210 06:28:04.711086  401365 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:28:04.713917  401365 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:04.716867  401365 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:28:04.719925  401365 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:28:04.722835  401365 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:28:04.726336  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:04.726469  401365 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:28:04.754166  401365 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:28:04.754279  401365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:28:04.810645  401365 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:28:04.801435563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:28:04.810756  401365 docker.go:319] overlay module found
	I1210 06:28:04.813864  401365 out.go:179] * Using the docker driver based on existing profile
	I1210 06:28:04.816769  401365 start.go:309] selected driver: docker
	I1210 06:28:04.816791  401365 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:04.816907  401365 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:28:04.817028  401365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:28:04.870143  401365 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:28:04.860525891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:28:04.870593  401365 cni.go:84] Creating CNI manager for ""
	I1210 06:28:04.870644  401365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:28:04.870692  401365 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:04.873854  401365 out.go:179] * Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	I1210 06:28:04.876935  401365 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:28:04.879860  401365 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:28:04.882747  401365 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:28:04.882931  401365 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:28:04.906679  401365 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:28:04.906698  401365 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:28:04.939349  401365 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 06:28:05.106989  401365 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
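	Both preload mirrors return 404 because no v18 preload tarball has been published for the v1.35.0-rc.1 / cri-o / arm64 combination, so minikube falls back to the per-image cache shown a few lines below. The missing tarball can be confirmed directly (URL copied from the first warning; it should report 404 for as long as the artifact is unpublished):

    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 | head -n 1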
	I1210 06:28:05.107216  401365 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/config.json ...
	I1210 06:28:05.107505  401365 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:28:05.107566  401365 start.go:360] acquireMachinesLock for functional-253997: {Name:mkd4a204596bf14d7530a6c5c103756527acdf26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.107643  401365 start.go:364] duration metric: took 39.278µs to acquireMachinesLock for "functional-253997"
	I1210 06:28:05.107681  401365 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:28:05.107701  401365 fix.go:54] fixHost starting: 
	I1210 06:28:05.107821  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.108032  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:05.134635  401365 fix.go:112] recreateIfNeeded on functional-253997: state=Running err=<nil>
	W1210 06:28:05.134664  401365 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:28:05.138161  401365 out.go:252] * Updating the running docker "functional-253997" container ...
	I1210 06:28:05.138204  401365 machine.go:94] provisionDockerMachine start ...
	I1210 06:28:05.138290  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.156912  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.157271  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.157282  401365 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:28:05.272681  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.312543  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:28:05.312568  401365 ubuntu.go:182] provisioning hostname "functional-253997"
	I1210 06:28:05.312643  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.337102  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.337416  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.337433  401365 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-253997 && echo "functional-253997" | sudo tee /etc/hostname
	I1210 06:28:05.435781  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.503700  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:28:05.503808  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.525010  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.525371  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.525395  401365 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-253997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-253997/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-253997' | sudo tee -a /etc/hosts; 
				fi
			fi
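	The shell fragment above is minikube's idempotent /etc/hosts update: it rewrites the existing 127.0.1.1 line to the profile hostname only when no line already ends in that hostname, and appends a fresh entry otherwise. The result can be spot-checked from the host, assuming the profile container is still running:

    minikube ssh -p functional-253997 -- grep functional-253997 /etc/hosts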
	I1210 06:28:05.596990  401365 cache.go:107] acquiring lock: {Name:mk9268471d41d450748d4b0133d1a043378776f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597093  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:28:05.597107  401365 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 135.879µs
	I1210 06:28:05.597123  401365 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597148  401365 cache.go:107] acquiring lock: {Name:mk8250dc655f821bf2674ed77a7683798ede4f4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597196  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:28:05.597205  401365 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 71.098µs
	I1210 06:28:05.597212  401365 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597224  401365 cache.go:107] acquiring lock: {Name:mk1c0334e51772e4f6b3429cc05b91ec19d54b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597256  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:28:05.597264  401365 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 41.773µs
	I1210 06:28:05.597271  401365 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597286  401365 cache.go:107] acquiring lock: {Name:mk41aaba55a3296a937cf380d44dc9920df69546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597313  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:28:05.597325  401365 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 45.342µs
	I1210 06:28:05.597331  401365 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597347  401365 cache.go:107] acquiring lock: {Name:mkc2831727d74e5fadb1c00bd89f0236b385200e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597380  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:28:05.597390  401365 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 49.009µs
	I1210 06:28:05.597395  401365 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:28:05.597404  401365 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597432  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:28:05.597441  401365 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 38.597µs
	I1210 06:28:05.597447  401365 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:28:05.597457  401365 cache.go:107] acquiring lock: {Name:mk7e052900e41419dc78d3311f27db508a8f64b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597487  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:28:05.597494  401365 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.163µs
	I1210 06:28:05.597499  401365 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:28:05.597517  401365 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597571  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:28:05.597584  401365 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.023µs
	I1210 06:28:05.597591  401365 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:28:05.597598  401365 cache.go:87] Successfully saved all images to host disk.
	I1210 06:28:05.681682  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:28:05.681708  401365 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 06:28:05.681741  401365 ubuntu.go:190] setting up certificates
	I1210 06:28:05.681752  401365 provision.go:84] configureAuth start
	I1210 06:28:05.681819  401365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:28:05.699808  401365 provision.go:143] copyHostCerts
	I1210 06:28:05.699863  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:28:05.699905  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 06:28:05.699919  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:28:05.699992  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 06:28:05.700081  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:28:05.700104  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 06:28:05.700113  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:28:05.700142  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 06:28:05.700188  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:28:05.700207  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 06:28:05.700218  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:28:05.700242  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 06:28:05.700300  401365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.functional-253997 san=[127.0.0.1 192.168.49.2 functional-253997 localhost minikube]
	I1210 06:28:05.936274  401365 provision.go:177] copyRemoteCerts
	I1210 06:28:05.936350  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:28:05.936418  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.954560  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.065031  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 06:28:06.065092  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:28:06.082556  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 06:28:06.082620  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:28:06.101057  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 06:28:06.101135  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:28:06.119676  401365 provision.go:87] duration metric: took 437.892883ms to configureAuth
	I1210 06:28:06.119777  401365 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:28:06.119980  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:06.120085  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.137920  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:06.138235  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:06.138256  401365 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:28:06.452845  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:28:06.452929  401365 machine.go:97] duration metric: took 1.314715304s to provisionDockerMachine
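	The tee in the command above writes the runtime's extra flags to /etc/sysconfig/crio.minikube, which the kicbase image's crio systemd unit appears to pick up as an environment file (an assumption about the unit definition, consistent with the restart issued in the same command). To see what landed in the file:

    minikube ssh -p functional-253997 -- cat /etc/sysconfig/crio.minikube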
	I1210 06:28:06.452956  401365 start.go:293] postStartSetup for "functional-253997" (driver="docker")
	I1210 06:28:06.452990  401365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:28:06.453063  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:28:06.453144  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.470784  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.577269  401365 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:28:06.580692  401365 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 06:28:06.580715  401365 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 06:28:06.580720  401365 command_runner.go:130] > VERSION_ID="12"
	I1210 06:28:06.580725  401365 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 06:28:06.580730  401365 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 06:28:06.580768  401365 command_runner.go:130] > ID=debian
	I1210 06:28:06.580780  401365 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 06:28:06.580785  401365 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 06:28:06.580791  401365 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 06:28:06.580887  401365 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:28:06.580933  401365 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:28:06.580952  401365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 06:28:06.581012  401365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 06:28:06.581098  401365 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 06:28:06.581111  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> /etc/ssl/certs/3642652.pem
	I1210 06:28:06.581203  401365 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> hosts in /etc/test/nested/copy/364265
	I1210 06:28:06.581211  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> /etc/test/nested/copy/364265/hosts
	I1210 06:28:06.581307  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364265
	I1210 06:28:06.588834  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:28:06.607350  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts --> /etc/test/nested/copy/364265/hosts (40 bytes)
	I1210 06:28:06.625111  401365 start.go:296] duration metric: took 172.118023ms for postStartSetup
	I1210 06:28:06.625251  401365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:28:06.625310  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.643314  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.746089  401365 command_runner.go:130] > 11%
	I1210 06:28:06.746641  401365 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:28:06.751190  401365 command_runner.go:130] > 174G
	I1210 06:28:06.751596  401365 fix.go:56] duration metric: took 1.643890859s for fixHost
	I1210 06:28:06.751620  401365 start.go:83] releasing machines lock for "functional-253997", held for 1.643948944s
	I1210 06:28:06.751695  401365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:28:06.769599  401365 ssh_runner.go:195] Run: cat /version.json
	I1210 06:28:06.769653  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.769923  401365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:28:06.769973  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.794205  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.801527  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.995023  401365 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 06:28:06.995129  401365 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 06:28:06.995269  401365 ssh_runner.go:195] Run: systemctl --version
	I1210 06:28:07.001581  401365 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 06:28:07.001629  401365 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 06:28:07.002099  401365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:28:07.048284  401365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 06:28:07.052994  401365 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 06:28:07.053661  401365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:28:07.053769  401365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:28:07.062754  401365 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:28:07.062818  401365 start.go:496] detecting cgroup driver to use...
	I1210 06:28:07.062869  401365 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:28:07.062946  401365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:28:07.079107  401365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:28:07.094803  401365 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:28:07.094958  401365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:28:07.114470  401365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:28:07.128193  401365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:28:07.258424  401365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:28:07.374265  401365 docker.go:234] disabling docker service ...
	I1210 06:28:07.374339  401365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:28:07.389285  401365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:28:07.403201  401365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:28:07.521904  401365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:28:07.641023  401365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
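	Here minikube stops, disables, and masks both cri-docker and docker so that CRI-O is the only runtime answering on the node; masking makes the units unstartable even as dependencies of other units. A quick way to confirm the state afterwards (a sketch against this profile):

    minikube ssh -p functional-253997 -- sudo systemctl is-enabled docker.service
    # a masked unit prints "masked" and exits non-zero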
	I1210 06:28:07.653771  401365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:28:07.666535  401365 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
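	Writing /etc/crictl.yaml pins crictl to CRI-O's socket, so the later crictl calls in this log need no --runtime-endpoint flag. The equivalent one-off invocation against the same socket, run inside the node:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version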
	I1210 06:28:07.667719  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:07.817082  401365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:28:07.817158  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.826426  401365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:28:07.826509  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.835611  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.844530  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.853511  401365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:28:07.861378  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.870726  401365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.879012  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
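	The sed runs above reduce to four effective settings in /etc/crio/crio.conf.d/02-crio.conf: the pinned pause image, cgroupfs as the cgroup manager (matching the driver detected on the host earlier), conmon placed in the pod cgroup, and the unprivileged-port sysctl. A spot check of the drop-in (line order may differ):

    minikube ssh -p functional-253997 -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf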
	I1210 06:28:07.888039  401365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:28:07.894740  401365 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 06:28:07.895767  401365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:28:07.903878  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:08.028500  401365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:28:08.203883  401365 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:28:08.204004  401365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:28:08.207826  401365 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1210 06:28:08.207850  401365 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 06:28:08.207858  401365 command_runner.go:130] > Device: 0,72	Inode: 1753        Links: 1
	I1210 06:28:08.207864  401365 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:28:08.207869  401365 command_runner.go:130] > Access: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207875  401365 command_runner.go:130] > Modify: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207879  401365 command_runner.go:130] > Change: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207883  401365 command_runner.go:130] >  Birth: -
	I1210 06:28:08.207920  401365 start.go:564] Will wait 60s for crictl version
	I1210 06:28:08.207972  401365 ssh_runner.go:195] Run: which crictl
	I1210 06:28:08.211603  401365 command_runner.go:130] > /usr/local/bin/crictl
	I1210 06:28:08.211673  401365 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:28:08.233344  401365 command_runner.go:130] > Version:  0.1.0
	I1210 06:28:08.233366  401365 command_runner.go:130] > RuntimeName:  cri-o
	I1210 06:28:08.233371  401365 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1210 06:28:08.233486  401365 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 06:28:08.235784  401365 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:28:08.235868  401365 ssh_runner.go:195] Run: crio --version
	I1210 06:28:08.263554  401365 command_runner.go:130] > crio version 1.34.3
	I1210 06:28:08.263582  401365 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 06:28:08.263590  401365 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 06:28:08.263598  401365 command_runner.go:130] >    GitTreeState:   dirty
	I1210 06:28:08.263603  401365 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 06:28:08.263609  401365 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 06:28:08.263614  401365 command_runner.go:130] >    Compiler:       gc
	I1210 06:28:08.263618  401365 command_runner.go:130] >    Platform:       linux/arm64
	I1210 06:28:08.263625  401365 command_runner.go:130] >    Linkmode:       static
	I1210 06:28:08.263631  401365 command_runner.go:130] >    BuildTags:
	I1210 06:28:08.263635  401365 command_runner.go:130] >      static
	I1210 06:28:08.263641  401365 command_runner.go:130] >      netgo
	I1210 06:28:08.263644  401365 command_runner.go:130] >      osusergo
	I1210 06:28:08.263649  401365 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 06:28:08.263658  401365 command_runner.go:130] >      seccomp
	I1210 06:28:08.263662  401365 command_runner.go:130] >      apparmor
	I1210 06:28:08.263665  401365 command_runner.go:130] >      selinux
	I1210 06:28:08.263673  401365 command_runner.go:130] >    LDFlags:          unknown
	I1210 06:28:08.263678  401365 command_runner.go:130] >    SeccompEnabled:   true
	I1210 06:28:08.263686  401365 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 06:28:08.265277  401365 ssh_runner.go:195] Run: crio --version
	I1210 06:28:08.292854  401365 command_runner.go:130] > crio version 1.34.3
	I1210 06:28:08.292877  401365 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 06:28:08.292884  401365 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 06:28:08.292894  401365 command_runner.go:130] >    GitTreeState:   dirty
	I1210 06:28:08.292899  401365 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 06:28:08.292903  401365 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 06:28:08.292909  401365 command_runner.go:130] >    Compiler:       gc
	I1210 06:28:08.292914  401365 command_runner.go:130] >    Platform:       linux/arm64
	I1210 06:28:08.292918  401365 command_runner.go:130] >    Linkmode:       static
	I1210 06:28:08.292921  401365 command_runner.go:130] >    BuildTags:
	I1210 06:28:08.292925  401365 command_runner.go:130] >      static
	I1210 06:28:08.292929  401365 command_runner.go:130] >      netgo
	I1210 06:28:08.292932  401365 command_runner.go:130] >      osusergo
	I1210 06:28:08.292936  401365 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 06:28:08.292939  401365 command_runner.go:130] >      seccomp
	I1210 06:28:08.292943  401365 command_runner.go:130] >      apparmor
	I1210 06:28:08.292947  401365 command_runner.go:130] >      selinux
	I1210 06:28:08.292951  401365 command_runner.go:130] >    LDFlags:          unknown
	I1210 06:28:08.292955  401365 command_runner.go:130] >    SeccompEnabled:   true
	I1210 06:28:08.292959  401365 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 06:28:08.297960  401365 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:28:08.300955  401365 cli_runner.go:164] Run: docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:28:08.316701  401365 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:28:08.320890  401365 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 06:28:08.321107  401365 kubeadm.go:884] updating cluster {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:28:08.321383  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.467539  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.630219  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.778675  401365 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:28:08.778770  401365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:28:08.809702  401365 command_runner.go:130] > {
	I1210 06:28:08.809721  401365 command_runner.go:130] >   "images":  [
	I1210 06:28:08.809725  401365 command_runner.go:130] >     {
	I1210 06:28:08.809734  401365 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1210 06:28:08.809739  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809744  401365 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:28:08.809748  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809753  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809762  401365 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1210 06:28:08.809765  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809770  401365 command_runner.go:130] >       "size":  "29035622",
	I1210 06:28:08.809784  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809789  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809792  401365 command_runner.go:130] >     },
	I1210 06:28:08.809795  401365 command_runner.go:130] >     {
	I1210 06:28:08.809802  401365 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:28:08.809806  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809812  401365 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:28:08.809815  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809819  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809827  401365 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1210 06:28:08.809830  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809834  401365 command_runner.go:130] >       "size":  "74488375",
	I1210 06:28:08.809839  401365 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:28:08.809843  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809846  401365 command_runner.go:130] >     },
	I1210 06:28:08.809850  401365 command_runner.go:130] >     {
	I1210 06:28:08.809856  401365 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1210 06:28:08.809860  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809865  401365 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1210 06:28:08.809868  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809872  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809882  401365 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c711b5adb8fed0c7e2a62a6c327fd0f8486f90b93c1ebd0ba1e790b930373aae"
	I1210 06:28:08.809885  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809889  401365 command_runner.go:130] >       "size":  "60849030",
	I1210 06:28:08.809893  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.809897  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.809900  401365 command_runner.go:130] >       },
	I1210 06:28:08.809904  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809908  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809911  401365 command_runner.go:130] >     },
	I1210 06:28:08.809915  401365 command_runner.go:130] >     {
	I1210 06:28:08.809921  401365 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1210 06:28:08.809925  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809934  401365 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1210 06:28:08.809938  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809941  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809949  401365 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:112cff968330968b8d8cc75dd9f232b5b048067a2151629e56b80ff7af621b72"
	I1210 06:28:08.809954  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809958  401365 command_runner.go:130] >       "size":  "85012778",
	I1210 06:28:08.809961  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.809965  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.809968  401365 command_runner.go:130] >       },
	I1210 06:28:08.809973  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809977  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809980  401365 command_runner.go:130] >     },
	I1210 06:28:08.809983  401365 command_runner.go:130] >     {
	I1210 06:28:08.809989  401365 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1210 06:28:08.809994  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809999  401365 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1210 06:28:08.810002  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810006  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810014  401365 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:89a8e28214a1c4b4631281e37feaf54299e7510d43c5445d4adc88575390f71e"
	I1210 06:28:08.810017  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810021  401365 command_runner.go:130] >       "size":  "72167568",
	I1210 06:28:08.810030  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810035  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.810038  401365 command_runner.go:130] >       },
	I1210 06:28:08.810042  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810046  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810049  401365 command_runner.go:130] >     },
	I1210 06:28:08.810052  401365 command_runner.go:130] >     {
	I1210 06:28:08.810058  401365 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1210 06:28:08.810062  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810068  401365 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1210 06:28:08.810072  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810076  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810086  401365 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:01f8409e5c04cbb256f29dc118ff184a24b9cbb97c7f6d3bd462333b366566ca"
	I1210 06:28:08.810089  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810093  401365 command_runner.go:130] >       "size":  "74105636",
	I1210 06:28:08.810097  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810101  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810104  401365 command_runner.go:130] >     },
	I1210 06:28:08.810107  401365 command_runner.go:130] >     {
	I1210 06:28:08.810114  401365 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1210 06:28:08.810117  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810127  401365 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1210 06:28:08.810131  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810134  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810144  401365 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9a2a4c91f5fdaf1a3c5e839f425cc7461b9e24d5f0816c207638df25ade3eda9"
	I1210 06:28:08.810147  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810151  401365 command_runner.go:130] >       "size":  "49819792",
	I1210 06:28:08.810154  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810158  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.810160  401365 command_runner.go:130] >       },
	I1210 06:28:08.810165  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810169  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810172  401365 command_runner.go:130] >     },
	I1210 06:28:08.810175  401365 command_runner.go:130] >     {
	I1210 06:28:08.810181  401365 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:28:08.810185  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810189  401365 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:28:08.810192  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810196  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810203  401365 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1210 06:28:08.810206  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810210  401365 command_runner.go:130] >       "size":  "517328",
	I1210 06:28:08.810213  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810217  401365 command_runner.go:130] >         "value":  "65535"
	I1210 06:28:08.810220  401365 command_runner.go:130] >       },
	I1210 06:28:08.810228  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810232  401365 command_runner.go:130] >       "pinned":  true
	I1210 06:28:08.810234  401365 command_runner.go:130] >     }
	I1210 06:28:08.810237  401365 command_runner.go:130] >   ]
	I1210 06:28:08.810240  401365 command_runner.go:130] > }
	I1210 06:28:08.812152  401365 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:28:08.812177  401365 cache_images.go:86] Images are preloaded, skipping loading
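The preload decision above follows from the `crictl images --output json` output echoed earlier. A minimal sketch of that kind of check (the JSON shape is taken from the log; the `required` list is an illustrative subset, not minikube's actual code path in crio.go):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// imageList mirrors the JSON printed by `crictl images --output json` above.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative subset of the tags a preload check would expect for
	// Kubernetes v1.35.0-rc.1 on cri-o.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
		"registry.k8s.io/etcd:3.6.6-0",
	}
	var missing []string
	for _, r := range required {
		if !have[r] {
			missing = append(missing, r)
		}
	}
	if len(missing) > 0 {
		log.Fatalf("not preloaded, missing: %s", strings.Join(missing, ", "))
	}
	fmt.Println("all images are preloaded for cri-o runtime")
}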
	I1210 06:28:08.812185  401365 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1210 06:28:08.812284  401365 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-253997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
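The kubelet systemd drop-in logged above is rendered from the node and cluster config that follows it. A minimal text/template sketch of that rendering step, using only fields visible in this log (minikube's real template carries more flags and lives elsewhere in the codebase):

package main

import (
	"os"
	"text/template"
)

// kubeletTmpl is an illustrative template producing a drop-in shaped like
// the one logged above.
const kubeletTmpl = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		Runtime, KubernetesVersion, NodeName, NodeIP string
	}{"crio", "v1.35.0-rc.1", "functional-253997", "192.168.49.2"}
	tmpl := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}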
	I1210 06:28:08.812367  401365 ssh_runner.go:195] Run: crio config
	I1210 06:28:08.860605  401365 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1210 06:28:08.860628  401365 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1210 06:28:08.860635  401365 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1210 06:28:08.860638  401365 command_runner.go:130] > #
	I1210 06:28:08.860654  401365 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1210 06:28:08.860661  401365 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1210 06:28:08.860668  401365 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1210 06:28:08.860677  401365 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1210 06:28:08.860680  401365 command_runner.go:130] > # reload'.
	I1210 06:28:08.860687  401365 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1210 06:28:08.860694  401365 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1210 06:28:08.860700  401365 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1210 06:28:08.860706  401365 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1210 06:28:08.860709  401365 command_runner.go:130] > [crio]
	I1210 06:28:08.860716  401365 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1210 06:28:08.860721  401365 command_runner.go:130] > # containers images, in this directory.
	I1210 06:28:08.860730  401365 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1210 06:28:08.860737  401365 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1210 06:28:08.860742  401365 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1210 06:28:08.860760  401365 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1210 06:28:08.860811  401365 command_runner.go:130] > # imagestore = ""
	I1210 06:28:08.860819  401365 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1210 06:28:08.860826  401365 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1210 06:28:08.860837  401365 command_runner.go:130] > # storage_driver = "overlay"
	I1210 06:28:08.860843  401365 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1210 06:28:08.860850  401365 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1210 06:28:08.860853  401365 command_runner.go:130] > # storage_option = [
	I1210 06:28:08.860857  401365 command_runner.go:130] > # ]
	I1210 06:28:08.860864  401365 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1210 06:28:08.860870  401365 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1210 06:28:08.860874  401365 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1210 06:28:08.860880  401365 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1210 06:28:08.860886  401365 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1210 06:28:08.860890  401365 command_runner.go:130] > # always happen on a node reboot
	I1210 06:28:08.860894  401365 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1210 06:28:08.860905  401365 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1210 06:28:08.860911  401365 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1210 06:28:08.860918  401365 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1210 06:28:08.860922  401365 command_runner.go:130] > # version_file_persist = ""
	I1210 06:28:08.860930  401365 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1210 06:28:08.860938  401365 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1210 06:28:08.860941  401365 command_runner.go:130] > # internal_wipe = true
	I1210 06:28:08.860950  401365 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1210 06:28:08.860955  401365 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1210 06:28:08.860959  401365 command_runner.go:130] > # internal_repair = true
	I1210 06:28:08.860964  401365 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1210 06:28:08.860971  401365 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1210 06:28:08.860976  401365 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1210 06:28:08.860981  401365 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1210 06:28:08.860987  401365 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1210 06:28:08.860991  401365 command_runner.go:130] > [crio.api]
	I1210 06:28:08.860997  401365 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1210 06:28:08.861001  401365 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1210 06:28:08.861006  401365 command_runner.go:130] > # IP address on which the stream server will listen.
	I1210 06:28:08.861010  401365 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1210 06:28:08.861017  401365 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1210 06:28:08.861026  401365 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1210 06:28:08.861030  401365 command_runner.go:130] > # stream_port = "0"
	I1210 06:28:08.861035  401365 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1210 06:28:08.861040  401365 command_runner.go:130] > # stream_enable_tls = false
	I1210 06:28:08.861046  401365 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1210 06:28:08.861050  401365 command_runner.go:130] > # stream_idle_timeout = ""
	I1210 06:28:08.861056  401365 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1210 06:28:08.861062  401365 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1210 06:28:08.861066  401365 command_runner.go:130] > # stream_tls_cert = ""
	I1210 06:28:08.861072  401365 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1210 06:28:08.861077  401365 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1210 06:28:08.861081  401365 command_runner.go:130] > # stream_tls_key = ""
	I1210 06:28:08.861087  401365 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1210 06:28:08.861093  401365 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1210 06:28:08.861097  401365 command_runner.go:130] > # automatically pick up the changes.
	I1210 06:28:08.861446  401365 command_runner.go:130] > # stream_tls_ca = ""
	I1210 06:28:08.861478  401365 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 06:28:08.861569  401365 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1210 06:28:08.861581  401365 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 06:28:08.861586  401365 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1210 06:28:08.861593  401365 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1210 06:28:08.861599  401365 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1210 06:28:08.861602  401365 command_runner.go:130] > [crio.runtime]
	I1210 06:28:08.861609  401365 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1210 06:28:08.861614  401365 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1210 06:28:08.861628  401365 command_runner.go:130] > # "nofile=1024:2048"
	I1210 06:28:08.861634  401365 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1210 06:28:08.861638  401365 command_runner.go:130] > # default_ulimits = [
	I1210 06:28:08.861653  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861660  401365 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1210 06:28:08.861663  401365 command_runner.go:130] > # no_pivot = false
	I1210 06:28:08.861669  401365 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1210 06:28:08.861675  401365 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1210 06:28:08.861681  401365 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1210 06:28:08.861687  401365 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1210 06:28:08.861696  401365 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1210 06:28:08.861703  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 06:28:08.861707  401365 command_runner.go:130] > # conmon = ""
	I1210 06:28:08.861711  401365 command_runner.go:130] > # Cgroup setting for conmon
	I1210 06:28:08.861718  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1210 06:28:08.861722  401365 command_runner.go:130] > conmon_cgroup = "pod"
	I1210 06:28:08.861728  401365 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1210 06:28:08.861733  401365 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1210 06:28:08.861740  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 06:28:08.861744  401365 command_runner.go:130] > # conmon_env = [
	I1210 06:28:08.861747  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861753  401365 command_runner.go:130] > # Additional environment variables to set for all the
	I1210 06:28:08.861758  401365 command_runner.go:130] > # containers. These are overridden if set in the
	I1210 06:28:08.861764  401365 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1210 06:28:08.861768  401365 command_runner.go:130] > # default_env = [
	I1210 06:28:08.861771  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861787  401365 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1210 06:28:08.861795  401365 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1210 06:28:08.861799  401365 command_runner.go:130] > # selinux = false
	I1210 06:28:08.861809  401365 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1210 06:28:08.861817  401365 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1210 06:28:08.861823  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862101  401365 command_runner.go:130] > # seccomp_profile = ""
	I1210 06:28:08.862113  401365 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1210 06:28:08.862119  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862201  401365 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1210 06:28:08.862211  401365 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1210 06:28:08.862225  401365 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1210 06:28:08.862232  401365 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1210 06:28:08.862239  401365 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1210 06:28:08.862244  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862248  401365 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1210 06:28:08.862254  401365 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1210 06:28:08.862259  401365 command_runner.go:130] > # the cgroup blockio controller.
	I1210 06:28:08.862263  401365 command_runner.go:130] > # blockio_config_file = ""
	I1210 06:28:08.862273  401365 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1210 06:28:08.862283  401365 command_runner.go:130] > # blockio parameters.
	I1210 06:28:08.862294  401365 command_runner.go:130] > # blockio_reload = false
	I1210 06:28:08.862301  401365 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1210 06:28:08.862304  401365 command_runner.go:130] > # irqbalance daemon.
	I1210 06:28:08.862310  401365 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1210 06:28:08.862316  401365 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1210 06:28:08.862323  401365 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1210 06:28:08.862330  401365 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1210 06:28:08.862336  401365 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1210 06:28:08.862342  401365 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1210 06:28:08.862347  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862351  401365 command_runner.go:130] > # rdt_config_file = ""
	I1210 06:28:08.862356  401365 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1210 06:28:08.862384  401365 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1210 06:28:08.862391  401365 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1210 06:28:08.862666  401365 command_runner.go:130] > # separate_pull_cgroup = ""
	I1210 06:28:08.862678  401365 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1210 06:28:08.862685  401365 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1210 06:28:08.862689  401365 command_runner.go:130] > # will be added.
	I1210 06:28:08.862693  401365 command_runner.go:130] > # default_capabilities = [
	I1210 06:28:08.862777  401365 command_runner.go:130] > # 	"CHOWN",
	I1210 06:28:08.862786  401365 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1210 06:28:08.862797  401365 command_runner.go:130] > # 	"FSETID",
	I1210 06:28:08.862802  401365 command_runner.go:130] > # 	"FOWNER",
	I1210 06:28:08.862806  401365 command_runner.go:130] > # 	"SETGID",
	I1210 06:28:08.862809  401365 command_runner.go:130] > # 	"SETUID",
	I1210 06:28:08.862838  401365 command_runner.go:130] > # 	"SETPCAP",
	I1210 06:28:08.862844  401365 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1210 06:28:08.862847  401365 command_runner.go:130] > # 	"KILL",
	I1210 06:28:08.862850  401365 command_runner.go:130] > # ]
	I1210 06:28:08.862858  401365 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1210 06:28:08.862865  401365 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1210 06:28:08.863095  401365 command_runner.go:130] > # add_inheritable_capabilities = false
	I1210 06:28:08.863106  401365 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1210 06:28:08.863112  401365 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 06:28:08.863116  401365 command_runner.go:130] > default_sysctls = [
	I1210 06:28:08.863203  401365 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1210 06:28:08.863243  401365 command_runner.go:130] > ]
	I1210 06:28:08.863252  401365 command_runner.go:130] > # List of devices on the host that a
	I1210 06:28:08.863259  401365 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1210 06:28:08.863263  401365 command_runner.go:130] > # allowed_devices = [
	I1210 06:28:08.863314  401365 command_runner.go:130] > # 	"/dev/fuse",
	I1210 06:28:08.863326  401365 command_runner.go:130] > # 	"/dev/net/tun",
	I1210 06:28:08.863333  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863338  401365 command_runner.go:130] > # List of additional devices. specified as
	I1210 06:28:08.863345  401365 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1210 06:28:08.863351  401365 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1210 06:28:08.863357  401365 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 06:28:08.863361  401365 command_runner.go:130] > # additional_devices = [
	I1210 06:28:08.863363  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863368  401365 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1210 06:28:08.863372  401365 command_runner.go:130] > # cdi_spec_dirs = [
	I1210 06:28:08.863376  401365 command_runner.go:130] > # 	"/etc/cdi",
	I1210 06:28:08.863379  401365 command_runner.go:130] > # 	"/var/run/cdi",
	I1210 06:28:08.863382  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863388  401365 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1210 06:28:08.863394  401365 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1210 06:28:08.863398  401365 command_runner.go:130] > # Defaults to false.
	I1210 06:28:08.863403  401365 command_runner.go:130] > # device_ownership_from_security_context = false
	I1210 06:28:08.863410  401365 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1210 06:28:08.863415  401365 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1210 06:28:08.863419  401365 command_runner.go:130] > # hooks_dir = [
	I1210 06:28:08.863604  401365 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1210 06:28:08.863612  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863618  401365 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1210 06:28:08.863625  401365 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1210 06:28:08.863630  401365 command_runner.go:130] > # its default mounts from the following two files:
	I1210 06:28:08.863633  401365 command_runner.go:130] > #
	I1210 06:28:08.863640  401365 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1210 06:28:08.863646  401365 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1210 06:28:08.863652  401365 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1210 06:28:08.863655  401365 command_runner.go:130] > #
	I1210 06:28:08.863661  401365 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1210 06:28:08.863676  401365 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1210 06:28:08.863683  401365 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1210 06:28:08.863687  401365 command_runner.go:130] > #      only add mounts it finds in this file.
	I1210 06:28:08.863690  401365 command_runner.go:130] > #
	I1210 06:28:08.863719  401365 command_runner.go:130] > # default_mounts_file = ""
	I1210 06:28:08.863725  401365 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1210 06:28:08.863732  401365 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1210 06:28:08.863736  401365 command_runner.go:130] > # pids_limit = -1
	I1210 06:28:08.863742  401365 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1210 06:28:08.863748  401365 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1210 06:28:08.863761  401365 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1210 06:28:08.863771  401365 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1210 06:28:08.863775  401365 command_runner.go:130] > # log_size_max = -1
	I1210 06:28:08.863782  401365 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1210 06:28:08.863786  401365 command_runner.go:130] > # log_to_journald = false
	I1210 06:28:08.863792  401365 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1210 06:28:08.863974  401365 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1210 06:28:08.863984  401365 command_runner.go:130] > # Path to directory for container attach sockets.
	I1210 06:28:08.863990  401365 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1210 06:28:08.863996  401365 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1210 06:28:08.864082  401365 command_runner.go:130] > # bind_mount_prefix = ""
	I1210 06:28:08.864098  401365 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1210 06:28:08.864139  401365 command_runner.go:130] > # read_only = false
	I1210 06:28:08.864149  401365 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1210 06:28:08.864156  401365 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1210 06:28:08.864159  401365 command_runner.go:130] > # live configuration reload.
	I1210 06:28:08.864163  401365 command_runner.go:130] > # log_level = "info"
	I1210 06:28:08.864169  401365 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1210 06:28:08.864174  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.864178  401365 command_runner.go:130] > # log_filter = ""
	I1210 06:28:08.864183  401365 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1210 06:28:08.864190  401365 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1210 06:28:08.864193  401365 command_runner.go:130] > # separated by comma.
	I1210 06:28:08.864208  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864211  401365 command_runner.go:130] > # uid_mappings = ""
	I1210 06:28:08.864218  401365 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1210 06:28:08.864224  401365 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1210 06:28:08.864228  401365 command_runner.go:130] > # separated by comma.
	I1210 06:28:08.864236  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864440  401365 command_runner.go:130] > # gid_mappings = ""
	I1210 06:28:08.864451  401365 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1210 06:28:08.864458  401365 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 06:28:08.864465  401365 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 06:28:08.864473  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864477  401365 command_runner.go:130] > # minimum_mappable_uid = -1
	I1210 06:28:08.864483  401365 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1210 06:28:08.864493  401365 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 06:28:08.864501  401365 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 06:28:08.864514  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864541  401365 command_runner.go:130] > # minimum_mappable_gid = -1
	I1210 06:28:08.864548  401365 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1210 06:28:08.864555  401365 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1210 06:28:08.864560  401365 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1210 06:28:08.864572  401365 command_runner.go:130] > # ctr_stop_timeout = 30
	I1210 06:28:08.864578  401365 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1210 06:28:08.864588  401365 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1210 06:28:08.864593  401365 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1210 06:28:08.864598  401365 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1210 06:28:08.864602  401365 command_runner.go:130] > # drop_infra_ctr = true
	I1210 06:28:08.864608  401365 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1210 06:28:08.864613  401365 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1210 06:28:08.864621  401365 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1210 06:28:08.864625  401365 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1210 06:28:08.864632  401365 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1210 06:28:08.864638  401365 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1210 06:28:08.864644  401365 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1210 06:28:08.864649  401365 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1210 06:28:08.864653  401365 command_runner.go:130] > # shared_cpuset = ""
	I1210 06:28:08.864659  401365 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1210 06:28:08.864664  401365 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1210 06:28:08.864668  401365 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1210 06:28:08.864675  401365 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1210 06:28:08.864858  401365 command_runner.go:130] > # pinns_path = ""
	I1210 06:28:08.864869  401365 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1210 06:28:08.864876  401365 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1210 06:28:08.864881  401365 command_runner.go:130] > # enable_criu_support = true
	I1210 06:28:08.864886  401365 command_runner.go:130] > # Enable/disable the generation of the container,
	I1210 06:28:08.864892  401365 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1210 06:28:08.864935  401365 command_runner.go:130] > # enable_pod_events = false
	I1210 06:28:08.864946  401365 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 06:28:08.864960  401365 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1210 06:28:08.865092  401365 command_runner.go:130] > # default_runtime = "crun"
	I1210 06:28:08.865104  401365 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1210 06:28:08.865112  401365 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1210 06:28:08.865122  401365 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1210 06:28:08.865127  401365 command_runner.go:130] > # creation as a file is not desired either.
	I1210 06:28:08.865136  401365 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1210 06:28:08.865141  401365 command_runner.go:130] > # the hostname is being managed dynamically.
	I1210 06:28:08.865146  401365 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1210 06:28:08.865148  401365 command_runner.go:130] > # ]
	I1210 06:28:08.865158  401365 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1210 06:28:08.865165  401365 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1210 06:28:08.865171  401365 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1210 06:28:08.865177  401365 command_runner.go:130] > # Each entry in the table should follow the format:
	I1210 06:28:08.865179  401365 command_runner.go:130] > #
	I1210 06:28:08.865200  401365 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1210 06:28:08.865207  401365 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1210 06:28:08.865210  401365 command_runner.go:130] > # runtime_type = "oci"
	I1210 06:28:08.865215  401365 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1210 06:28:08.865219  401365 command_runner.go:130] > # inherit_default_runtime = false
	I1210 06:28:08.865224  401365 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1210 06:28:08.865229  401365 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1210 06:28:08.865233  401365 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1210 06:28:08.865236  401365 command_runner.go:130] > # monitor_env = []
	I1210 06:28:08.865241  401365 command_runner.go:130] > # privileged_without_host_devices = false
	I1210 06:28:08.865245  401365 command_runner.go:130] > # allowed_annotations = []
	I1210 06:28:08.865250  401365 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1210 06:28:08.865253  401365 command_runner.go:130] > # no_sync_log = false
	I1210 06:28:08.865257  401365 command_runner.go:130] > # default_annotations = {}
	I1210 06:28:08.865261  401365 command_runner.go:130] > # stream_websockets = false
	I1210 06:28:08.865265  401365 command_runner.go:130] > # seccomp_profile = ""
	I1210 06:28:08.865296  401365 command_runner.go:130] > # Where:
	I1210 06:28:08.865301  401365 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1210 06:28:08.865308  401365 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1210 06:28:08.865314  401365 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1210 06:28:08.865320  401365 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1210 06:28:08.865323  401365 command_runner.go:130] > #   in $PATH.
	I1210 06:28:08.865330  401365 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1210 06:28:08.865334  401365 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1210 06:28:08.865341  401365 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1210 06:28:08.865344  401365 command_runner.go:130] > #   state.
	I1210 06:28:08.865352  401365 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1210 06:28:08.865360  401365 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1210 06:28:08.865368  401365 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1210 06:28:08.865376  401365 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1210 06:28:08.865381  401365 command_runner.go:130] > #   the values from the default runtime on load time.
	I1210 06:28:08.865387  401365 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1210 06:28:08.865392  401365 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1210 06:28:08.865399  401365 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1210 06:28:08.865406  401365 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1210 06:28:08.865411  401365 command_runner.go:130] > #   The currently recognized values are:
	I1210 06:28:08.865417  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1210 06:28:08.865425  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1210 06:28:08.865431  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1210 06:28:08.865437  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1210 06:28:08.865444  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1210 06:28:08.865451  401365 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1210 06:28:08.865458  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1210 06:28:08.865464  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1210 06:28:08.865470  401365 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1210 06:28:08.865492  401365 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1210 06:28:08.865501  401365 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1210 06:28:08.865507  401365 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1210 06:28:08.865513  401365 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1210 06:28:08.865519  401365 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1210 06:28:08.865525  401365 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1210 06:28:08.865533  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1210 06:28:08.865539  401365 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1210 06:28:08.865552  401365 command_runner.go:130] > #   deprecated option "conmon".
	I1210 06:28:08.865560  401365 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1210 06:28:08.865565  401365 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1210 06:28:08.865572  401365 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1210 06:28:08.865578  401365 command_runner.go:130] > #   should be moved to the container's cgroup
	I1210 06:28:08.865587  401365 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1210 06:28:08.865592  401365 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1210 06:28:08.865599  401365 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1210 06:28:08.865607  401365 command_runner.go:130] > #   conmon-rs by using:
	I1210 06:28:08.865615  401365 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1210 06:28:08.865622  401365 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1210 06:28:08.865630  401365 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1210 06:28:08.865636  401365 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1210 06:28:08.865642  401365 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1210 06:28:08.865649  401365 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1210 06:28:08.865657  401365 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1210 06:28:08.865661  401365 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1210 06:28:08.865669  401365 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1210 06:28:08.865677  401365 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1210 06:28:08.865685  401365 command_runner.go:130] > #   when a machine crash happens.
	I1210 06:28:08.865693  401365 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1210 06:28:08.865700  401365 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1210 06:28:08.865708  401365 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1210 06:28:08.865713  401365 command_runner.go:130] > #   seccomp profile for the runtime.
	I1210 06:28:08.865719  401365 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1210 06:28:08.865744  401365 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1210 06:28:08.865747  401365 command_runner.go:130] > #
	I1210 06:28:08.865751  401365 command_runner.go:130] > # Using the seccomp notifier feature:
	I1210 06:28:08.865754  401365 command_runner.go:130] > #
	I1210 06:28:08.865762  401365 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1210 06:28:08.865768  401365 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1210 06:28:08.865771  401365 command_runner.go:130] > #
	I1210 06:28:08.865777  401365 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1210 06:28:08.865783  401365 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1210 06:28:08.865785  401365 command_runner.go:130] > #
	I1210 06:28:08.865793  401365 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1210 06:28:08.865797  401365 command_runner.go:130] > # feature.
	I1210 06:28:08.865800  401365 command_runner.go:130] > #
	I1210 06:28:08.865807  401365 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1210 06:28:08.865813  401365 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1210 06:28:08.865819  401365 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1210 06:28:08.865832  401365 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1210 06:28:08.865838  401365 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1210 06:28:08.865841  401365 command_runner.go:130] > #
	I1210 06:28:08.865847  401365 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1210 06:28:08.865853  401365 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1210 06:28:08.865856  401365 command_runner.go:130] > #
	I1210 06:28:08.865862  401365 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1210 06:28:08.865870  401365 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1210 06:28:08.865873  401365 command_runner.go:130] > #
	I1210 06:28:08.865880  401365 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1210 06:28:08.865885  401365 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1210 06:28:08.865889  401365 command_runner.go:130] > # limitation.
	I1210 06:28:08.865905  401365 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1210 06:28:08.866331  401365 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1210 06:28:08.866426  401365 command_runner.go:130] > runtime_type = ""
	I1210 06:28:08.866446  401365 command_runner.go:130] > runtime_root = "/run/crun"
	I1210 06:28:08.866464  401365 command_runner.go:130] > inherit_default_runtime = false
	I1210 06:28:08.866497  401365 command_runner.go:130] > runtime_config_path = ""
	I1210 06:28:08.866524  401365 command_runner.go:130] > container_min_memory = ""
	I1210 06:28:08.866577  401365 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 06:28:08.866606  401365 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 06:28:08.866632  401365 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 06:28:08.866654  401365 command_runner.go:130] > allowed_annotations = [
	I1210 06:28:08.866675  401365 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1210 06:28:08.866694  401365 command_runner.go:130] > ]
	I1210 06:28:08.866728  401365 command_runner.go:130] > privileged_without_host_devices = false
	I1210 06:28:08.866748  401365 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1210 06:28:08.866769  401365 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1210 06:28:08.866790  401365 command_runner.go:130] > runtime_type = ""
	I1210 06:28:08.866821  401365 command_runner.go:130] > runtime_root = "/run/runc"
	I1210 06:28:08.866840  401365 command_runner.go:130] > inherit_default_runtime = false
	I1210 06:28:08.866860  401365 command_runner.go:130] > runtime_config_path = ""
	I1210 06:28:08.866880  401365 command_runner.go:130] > container_min_memory = ""
	I1210 06:28:08.866908  401365 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 06:28:08.866932  401365 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 06:28:08.866953  401365 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 06:28:08.866974  401365 command_runner.go:130] > privileged_without_host_devices = false
	I1210 06:28:08.867007  401365 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1210 06:28:08.867043  401365 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1210 06:28:08.867068  401365 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1210 06:28:08.867104  401365 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1210 06:28:08.867134  401365 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1210 06:28:08.867162  401365 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1210 06:28:08.867185  401365 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1210 06:28:08.867213  401365 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1210 06:28:08.867246  401365 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1210 06:28:08.867272  401365 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1210 06:28:08.867293  401365 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1210 06:28:08.867324  401365 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1210 06:28:08.867347  401365 command_runner.go:130] > # Example:
	I1210 06:28:08.867368  401365 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1210 06:28:08.867390  401365 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1210 06:28:08.867422  401365 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1210 06:28:08.867444  401365 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1210 06:28:08.867461  401365 command_runner.go:130] > # cpuset = "0-1"
	I1210 06:28:08.867481  401365 command_runner.go:130] > # cpushares = "5"
	I1210 06:28:08.867501  401365 command_runner.go:130] > # cpuquota = "1000"
	I1210 06:28:08.867527  401365 command_runner.go:130] > # cpuperiod = "100000"
	I1210 06:28:08.867550  401365 command_runner.go:130] > # cpulimit = "35"
	I1210 06:28:08.867570  401365 command_runner.go:130] > # Where:
	I1210 06:28:08.867591  401365 command_runner.go:130] > # The workload name is workload-type.
	I1210 06:28:08.867625  401365 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1210 06:28:08.867647  401365 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1210 06:28:08.867667  401365 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1210 06:28:08.867691  401365 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1210 06:28:08.867724  401365 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
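Taken together, the workload comments above describe an opt-in, annotation-driven resource override mechanism. A minimal sketch of defining such a workload as a CRI-O drop-in, reusing the table name and cpuset/cpushares values from the commented example (the drop-in file name is illustrative, not from this run):

    # Hypothetical drop-in; keys and values mirror the commented example above.
    sudo tee /etc/crio/crio.conf.d/30-workload.conf >/dev/null <<'EOF'
    [crio.runtime.workloads.workload-type]
    activation_annotation = "io.crio/workload"
    annotation_prefix = "io.crio.workload-type"
    [crio.runtime.workloads.workload-type.resources]
    cpuset = "0-1"
    cpushares = "5"
    EOF
    sudo systemctl restart crio

Per the comments above, a pod would opt in by carrying the io.crio/workload annotation (key only; the value is ignored).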
	I1210 06:28:08.867747  401365 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1210 06:28:08.867767  401365 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1210 06:28:08.867786  401365 command_runner.go:130] > # Default value is set to true
	I1210 06:28:08.867808  401365 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1210 06:28:08.867842  401365 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1210 06:28:08.867862  401365 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1210 06:28:08.867882  401365 command_runner.go:130] > # Default value is set to 'false'
	I1210 06:28:08.867915  401365 command_runner.go:130] > # disable_hostport_mapping = false
	I1210 06:28:08.867942  401365 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1210 06:28:08.867964  401365 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1210 06:28:08.867982  401365 command_runner.go:130] > # timezone = ""
	I1210 06:28:08.868015  401365 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1210 06:28:08.868041  401365 command_runner.go:130] > #
	I1210 06:28:08.868060  401365 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1210 06:28:08.868081  401365 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1210 06:28:08.868110  401365 command_runner.go:130] > [crio.image]
	I1210 06:28:08.868133  401365 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1210 06:28:08.868150  401365 command_runner.go:130] > # default_transport = "docker://"
	I1210 06:28:08.868170  401365 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1210 06:28:08.868192  401365 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1210 06:28:08.868219  401365 command_runner.go:130] > # global_auth_file = ""
	I1210 06:28:08.868243  401365 command_runner.go:130] > # The image used to instantiate infra containers.
	I1210 06:28:08.868264  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.868284  401365 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1210 06:28:08.868317  401365 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1210 06:28:08.868338  401365 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1210 06:28:08.868357  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.868374  401365 command_runner.go:130] > # pause_image_auth_file = ""
	I1210 06:28:08.868396  401365 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1210 06:28:08.868423  401365 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1210 06:28:08.868450  401365 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1210 06:28:08.868474  401365 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1210 06:28:08.868753  401365 command_runner.go:130] > # pause_command = "/pause"
	I1210 06:28:08.868765  401365 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1210 06:28:08.868772  401365 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1210 06:28:08.868778  401365 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1210 06:28:08.868784  401365 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1210 06:28:08.868791  401365 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1210 06:28:08.868797  401365 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1210 06:28:08.868802  401365 command_runner.go:130] > # pinned_images = [
	I1210 06:28:08.868834  401365 command_runner.go:130] > # ]
	I1210 06:28:08.868841  401365 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1210 06:28:08.868848  401365 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1210 06:28:08.868855  401365 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1210 06:28:08.868864  401365 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1210 06:28:08.868877  401365 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1210 06:28:08.868892  401365 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1210 06:28:08.868897  401365 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1210 06:28:08.868904  401365 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1210 06:28:08.868911  401365 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1210 06:28:08.868917  401365 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1210 06:28:08.868924  401365 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1210 06:28:08.868928  401365 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1210 06:28:08.868935  401365 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1210 06:28:08.868941  401365 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1210 06:28:08.868945  401365 command_runner.go:130] > # changing them here.
	I1210 06:28:08.868950  401365 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1210 06:28:08.868954  401365 command_runner.go:130] > # insecure_registries = [
	I1210 06:28:08.868957  401365 command_runner.go:130] > # ]
	I1210 06:28:08.868964  401365 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1210 06:28:08.868968  401365 command_runner.go:130] > # ignore; the last of these will ignore volumes entirely.
	I1210 06:28:08.868972  401365 command_runner.go:130] > # image_volumes = "mkdir"
	I1210 06:28:08.868978  401365 command_runner.go:130] > # Temporary directory to use for storing big files
	I1210 06:28:08.868982  401365 command_runner.go:130] > # big_files_temporary_dir = ""
	I1210 06:28:08.868988  401365 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1210 06:28:08.868995  401365 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1210 06:28:08.868999  401365 command_runner.go:130] > # auto_reload_registries = false
	I1210 06:28:08.869006  401365 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1210 06:28:08.869014  401365 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1210 06:28:08.869022  401365 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1210 06:28:08.869027  401365 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1210 06:28:08.869031  401365 command_runner.go:130] > # The mode of short name resolution.
	I1210 06:28:08.869039  401365 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1210 06:28:08.869047  401365 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1210 06:28:08.869051  401365 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1210 06:28:08.869055  401365 command_runner.go:130] > # short_name_mode = "enforcing"
	I1210 06:28:08.869061  401365 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1210 06:28:08.869067  401365 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1210 06:28:08.869299  401365 command_runner.go:130] > # oci_artifact_mount_support = true
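The pause_image and pinned_images options above interact: pinning the pause image keeps the kubelet's garbage collection from removing the placeholder container image. A minimal sketch of a drop-in doing both, assuming the default pause image shown above (file name illustrative):

    sudo tee /etc/crio/crio.conf.d/40-images.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
    pinned_images = ["registry.k8s.io/pause:3.10.1"]
    EOF
    sudo systemctl restart crio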
	I1210 06:28:08.869316  401365 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1210 06:28:08.869329  401365 command_runner.go:130] > # CNI plugins.
	I1210 06:28:08.869333  401365 command_runner.go:130] > [crio.network]
	I1210 06:28:08.869340  401365 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1210 06:28:08.869346  401365 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1210 06:28:08.869485  401365 command_runner.go:130] > # cni_default_network = ""
	I1210 06:28:08.869502  401365 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1210 06:28:08.869709  401365 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1210 06:28:08.869721  401365 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1210 06:28:08.869725  401365 command_runner.go:130] > # plugin_dirs = [
	I1210 06:28:08.869729  401365 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1210 06:28:08.869732  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869736  401365 command_runner.go:130] > # List of included pod metrics.
	I1210 06:28:08.869740  401365 command_runner.go:130] > # included_pod_metrics = [
	I1210 06:28:08.869743  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869749  401365 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1210 06:28:08.869752  401365 command_runner.go:130] > [crio.metrics]
	I1210 06:28:08.869757  401365 command_runner.go:130] > # Globally enable or disable metrics support.
	I1210 06:28:08.869763  401365 command_runner.go:130] > # enable_metrics = false
	I1210 06:28:08.869767  401365 command_runner.go:130] > # Specify enabled metrics collectors.
	I1210 06:28:08.869772  401365 command_runner.go:130] > # Per default all metrics are enabled.
	I1210 06:28:08.869778  401365 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1210 06:28:08.869785  401365 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1210 06:28:08.869791  401365 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1210 06:28:08.869796  401365 command_runner.go:130] > # metrics_collectors = [
	I1210 06:28:08.869800  401365 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1210 06:28:08.869805  401365 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1210 06:28:08.869809  401365 command_runner.go:130] > # 	"containers_oom_total",
	I1210 06:28:08.869813  401365 command_runner.go:130] > # 	"processes_defunct",
	I1210 06:28:08.869817  401365 command_runner.go:130] > # 	"operations_total",
	I1210 06:28:08.869821  401365 command_runner.go:130] > # 	"operations_latency_seconds",
	I1210 06:28:08.869826  401365 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1210 06:28:08.869830  401365 command_runner.go:130] > # 	"operations_errors_total",
	I1210 06:28:08.869834  401365 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1210 06:28:08.869839  401365 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1210 06:28:08.869843  401365 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1210 06:28:08.869851  401365 command_runner.go:130] > # 	"image_pulls_success_total",
	I1210 06:28:08.869855  401365 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1210 06:28:08.869860  401365 command_runner.go:130] > # 	"containers_oom_count_total",
	I1210 06:28:08.869865  401365 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1210 06:28:08.869873  401365 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1210 06:28:08.869878  401365 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1210 06:28:08.869881  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869887  401365 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1210 06:28:08.869891  401365 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1210 06:28:08.869896  401365 command_runner.go:130] > # The port on which the metrics server will listen.
	I1210 06:28:08.869901  401365 command_runner.go:130] > # metrics_port = 9090
	I1210 06:28:08.869906  401365 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1210 06:28:08.869910  401365 command_runner.go:130] > # metrics_socket = ""
	I1210 06:28:08.869915  401365 command_runner.go:130] > # The certificate for the secure metrics server.
	I1210 06:28:08.869921  401365 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1210 06:28:08.869928  401365 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1210 06:28:08.869934  401365 command_runner.go:130] > # certificate on any modification event.
	I1210 06:28:08.869938  401365 command_runner.go:130] > # metrics_cert = ""
	I1210 06:28:08.869943  401365 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1210 06:28:08.869948  401365 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1210 06:28:08.869963  401365 command_runner.go:130] > # metrics_key = ""
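Everything in [crio.metrics] above is commented out, so metrics are disabled in this run. A hedged sketch of enabling them with the defaults shown (CRI-O serves Prometheus metrics on the configured host/port):

    sudo tee /etc/crio/crio.conf.d/50-metrics.conf >/dev/null <<'EOF'
    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090
    EOF
    sudo systemctl restart crio
    # Prometheus-style scrape of the endpoint:
    curl -s http://127.0.0.1:9090/metrics | head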
	I1210 06:28:08.869970  401365 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1210 06:28:08.869973  401365 command_runner.go:130] > [crio.tracing]
	I1210 06:28:08.869978  401365 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1210 06:28:08.869982  401365 command_runner.go:130] > # enable_tracing = false
	I1210 06:28:08.869987  401365 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1210 06:28:08.869992  401365 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1210 06:28:08.869999  401365 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1210 06:28:08.870003  401365 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
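Similarly, tracing is off by default. A sketch of turning it on against an OTLP gRPC collector at the default endpoint, using the always-sample value the comment above names:

    sudo tee /etc/crio/crio.conf.d/60-tracing.conf >/dev/null <<'EOF'
    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"
    tracing_sampling_rate_per_million = 1000000
    EOF
    sudo systemctl restart crio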
	I1210 06:28:08.870007  401365 command_runner.go:130] > # CRI-O NRI configuration.
	I1210 06:28:08.870010  401365 command_runner.go:130] > [crio.nri]
	I1210 06:28:08.870014  401365 command_runner.go:130] > # Globally enable or disable NRI.
	I1210 06:28:08.870018  401365 command_runner.go:130] > # enable_nri = true
	I1210 06:28:08.870022  401365 command_runner.go:130] > # NRI socket to listen on.
	I1210 06:28:08.870026  401365 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1210 06:28:08.870031  401365 command_runner.go:130] > # NRI plugin directory to use.
	I1210 06:28:08.870035  401365 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1210 06:28:08.870044  401365 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1210 06:28:08.870049  401365 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1210 06:28:08.870054  401365 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1210 06:28:08.870120  401365 command_runner.go:130] > # nri_disable_connections = false
	I1210 06:28:08.870126  401365 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1210 06:28:08.870131  401365 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1210 06:28:08.870136  401365 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1210 06:28:08.870140  401365 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1210 06:28:08.870144  401365 command_runner.go:130] > # NRI default validator configuration.
	I1210 06:28:08.870151  401365 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1210 06:28:08.870158  401365 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1210 06:28:08.870166  401365 command_runner.go:130] > # can be restricted/rejected:
	I1210 06:28:08.870170  401365 command_runner.go:130] > # - OCI hook injection
	I1210 06:28:08.870176  401365 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1210 06:28:08.870182  401365 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1210 06:28:08.870187  401365 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1210 06:28:08.870192  401365 command_runner.go:130] > # - adjustment of linux namespaces
	I1210 06:28:08.870198  401365 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1210 06:28:08.870204  401365 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1210 06:28:08.870211  401365 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1210 06:28:08.870214  401365 command_runner.go:130] > #
	I1210 06:28:08.870219  401365 command_runner.go:130] > # [crio.nri.default_validator]
	I1210 06:28:08.870224  401365 command_runner.go:130] > # nri_enable_default_validator = false
	I1210 06:28:08.870229  401365 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1210 06:28:08.870235  401365 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1210 06:28:08.870240  401365 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1210 06:28:08.870245  401365 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1210 06:28:08.870249  401365 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1210 06:28:08.870254  401365 command_runner.go:130] > # nri_validator_required_plugins = [
	I1210 06:28:08.870256  401365 command_runner.go:130] > # ]
	I1210 06:28:08.870261  401365 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
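As documented above, the default validator can veto specific NRI adjustments. A sketch of enabling it and rejecting OCI hook injection, using only keys shown in the commented defaults (whether all keys combine cleanly like this is an assumption, not something this run exercises):

    sudo tee /etc/crio/crio.conf.d/70-nri.conf >/dev/null <<'EOF'
    [crio.nri]
    enable_nri = true
    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true
    EOF
    sudo systemctl restart crio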
	I1210 06:28:08.870267  401365 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1210 06:28:08.870270  401365 command_runner.go:130] > [crio.stats]
	I1210 06:28:08.870279  401365 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1210 06:28:08.870285  401365 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1210 06:28:08.870289  401365 command_runner.go:130] > # stats_collection_period = 0
	I1210 06:28:08.870295  401365 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1210 06:28:08.870301  401365 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1210 06:28:08.870309  401365 command_runner.go:130] > # collection_period = 0
	I1210 06:28:08.872234  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838776003Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1210 06:28:08.872284  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838812886Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1210 06:28:08.872309  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838840094Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1210 06:28:08.872334  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839193559Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1210 06:28:08.872381  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839375723Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:08.872413  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839707715Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1210 06:28:08.872441  401365 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
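The loader messages just above show where this entire dump comes from: CRI-O merges its built-in defaults with /etc/crio/crio.conf and then the drop-ins under /etc/crio/crio.conf.d (02-crio.conf, 10-crio.conf). To reproduce a comparable view by hand on the node (crio config is a real subcommand, though its exact output varies by version):

    # List the drop-ins CRI-O merged on top of its defaults:
    ls /etc/crio/crio.conf.d/
    # Print a fully commented configuration as CRI-O would load it:
    sudo crio config | less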
	I1210 06:28:08.872553  401365 cni.go:84] Creating CNI manager for ""
	I1210 06:28:08.872583  401365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:28:08.872624  401365 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:28:08.872677  401365 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-253997 NodeName:functional-253997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:28:08.872842  401365 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-253997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
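	minikube renders the multi-document config above and, as the scp line below shows, ships it to the node as /var/tmp/minikube/kubeadm.yaml.new. A hedged way to sanity-check such a file by hand, assuming a kubeadm recent enough (v1.26+) to have the config validate subcommand:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new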
	
	I1210 06:28:08.872963  401365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:28:08.882589  401365 command_runner.go:130] > kubeadm
	I1210 06:28:08.882664  401365 command_runner.go:130] > kubectl
	I1210 06:28:08.882683  401365 command_runner.go:130] > kubelet
	I1210 06:28:08.883772  401365 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:28:08.883860  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:28:08.894311  401365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:28:08.917477  401365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:28:08.933123  401365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1210 06:28:08.951215  401365 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:28:08.955022  401365 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 06:28:08.955137  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:09.068336  401365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:28:09.626369  401365 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997 for IP: 192.168.49.2
	I1210 06:28:09.626393  401365 certs.go:195] generating shared ca certs ...
	I1210 06:28:09.626411  401365 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:09.626560  401365 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 06:28:09.626610  401365 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 06:28:09.626622  401365 certs.go:257] generating profile certs ...
	I1210 06:28:09.626723  401365 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key
	I1210 06:28:09.626797  401365 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423
	I1210 06:28:09.626842  401365 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key
	I1210 06:28:09.626855  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 06:28:09.626868  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 06:28:09.626879  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 06:28:09.626895  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 06:28:09.626917  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 06:28:09.626934  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 06:28:09.626951  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 06:28:09.626967  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 06:28:09.627018  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 06:28:09.627054  401365 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 06:28:09.627067  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:28:09.627098  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:28:09.627129  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:28:09.627160  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 06:28:09.627208  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:28:09.627243  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.627257  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem -> /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.627269  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> /usr/share/ca-certificates/3642652.pem
	I1210 06:28:09.627907  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:28:09.646839  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:28:09.665451  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:28:09.684144  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:28:09.703168  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:28:09.722766  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:28:09.740755  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:28:09.758979  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:28:09.777915  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:28:09.796193  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 06:28:09.814097  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 06:28:09.831978  401365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:28:09.845391  401365 ssh_runner.go:195] Run: openssl version
	I1210 06:28:09.851779  401365 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 06:28:09.852274  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.860146  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:28:09.868064  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872198  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872310  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872381  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.915298  401365 command_runner.go:130] > b5213941
	I1210 06:28:09.915776  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:28:09.923881  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.931564  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 06:28:09.939347  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943515  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943602  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943706  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.984596  401365 command_runner.go:130] > 51391683
	I1210 06:28:09.985095  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:28:09.992884  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.000682  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 06:28:10.009973  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015475  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015546  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015611  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.058412  401365 command_runner.go:130] > 3ec20f2e
	I1210 06:28:10.059028  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
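The three openssl/ln sequences above implement OpenSSL's subject-hash directory layout: each CA certificate is symlinked under /etc/ssl/certs as <subject-hash>.0 so OpenSSL can locate it by hash (b5213941, 51391683 and 3ec20f2e in this run). The same flow as a standalone sketch, using one of the paths from this run:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints b5213941 here
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
    sudo test -L "/etc/ssl/certs/${hash}.0" && echo linked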
	I1210 06:28:10.067481  401365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:28:10.072097  401365 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:28:10.072141  401365 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 06:28:10.072148  401365 command_runner.go:130] > Device: 259,1	Inode: 3906312     Links: 1
	I1210 06:28:10.072155  401365 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:28:10.072162  401365 command_runner.go:130] > Access: 2025-12-10 06:24:00.744386425 +0000
	I1210 06:28:10.072185  401365 command_runner.go:130] > Modify: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072211  401365 command_runner.go:130] > Change: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072217  401365 command_runner.go:130] >  Birth: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072295  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:28:10.114065  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.114701  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:28:10.156441  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.157041  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:28:10.198547  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.198997  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:28:10.239473  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.239921  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:28:10.280741  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.281284  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:28:10.322073  401365 command_runner.go:130] > Certificate will not expire
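Each "-checkend 86400" invocation above asks openssl whether the certificate expires within the next 86400 seconds (24 h); exit status 0 produces the "Certificate will not expire" lines. A sketch looping over some of the same certificates:

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" && echo "${c}: valid for 24h"
    done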
	I1210 06:28:10.322510  401365 kubeadm.go:401] StartCluster: {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:10.322592  401365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:28:10.322670  401365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:28:10.349813  401365 cri.go:89] found id: ""
	I1210 06:28:10.349915  401365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:28:10.357053  401365 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 06:28:10.357076  401365 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 06:28:10.357083  401365 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 06:28:10.358087  401365 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:28:10.358107  401365 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:28:10.358179  401365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:28:10.366355  401365 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:28:10.366773  401365 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-253997" does not appear in /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.366892  401365 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-362392/kubeconfig needs updating (will repair): [kubeconfig missing "functional-253997" cluster setting kubeconfig missing "functional-253997" context setting]
	I1210 06:28:10.367176  401365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.367620  401365 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.367775  401365 kapi.go:59] client config for functional-253997: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:28:10.368328  401365 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:28:10.368348  401365 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:28:10.368357  401365 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:28:10.368361  401365 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:28:10.368366  401365 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:28:10.368683  401365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:28:10.368778  401365 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 06:28:10.376809  401365 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 06:28:10.376842  401365 kubeadm.go:602] duration metric: took 18.728652ms to restartPrimaryControlPlane
	I1210 06:28:10.376852  401365 kubeadm.go:403] duration metric: took 54.348915ms to StartCluster
	I1210 06:28:10.376867  401365 settings.go:142] acquiring lock: {Name:mk50f3096e55008942432e93c6cf4976a70e957e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.376930  401365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.377580  401365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.377783  401365 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:28:10.378131  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:10.378203  401365 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:28:10.378273  401365 addons.go:70] Setting storage-provisioner=true in profile "functional-253997"
	I1210 06:28:10.378288  401365 addons.go:239] Setting addon storage-provisioner=true in "functional-253997"
	I1210 06:28:10.378298  401365 addons.go:70] Setting default-storageclass=true in profile "functional-253997"
	I1210 06:28:10.378308  401365 host.go:66] Checking if "functional-253997" exists ...
	I1210 06:28:10.378325  401365 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-253997"
	I1210 06:28:10.378609  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.378772  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.382148  401365 out.go:179] * Verifying Kubernetes components...
	I1210 06:28:10.385829  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:10.411769  401365 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.411927  401365 kapi.go:59] client config for functional-253997: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:28:10.412189  401365 addons.go:239] Setting addon default-storageclass=true in "functional-253997"
	I1210 06:28:10.412217  401365 host.go:66] Checking if "functional-253997" exists ...
	I1210 06:28:10.412622  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.423310  401365 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:28:10.429289  401365 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:10.429319  401365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:28:10.429390  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:10.437508  401365 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:10.437529  401365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:28:10.437602  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:10.484090  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:10.489523  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:10.601993  401365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:28:10.611397  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:10.637290  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.377346  401365 node_ready.go:35] waiting up to 6m0s for node "functional-253997" to be "Ready" ...
	I1210 06:28:11.377544  401365 type.go:168] "Request Body" body=""
	I1210 06:28:11.377656  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.377728  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1210 06:28:11.377850  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.377894  401365 retry.go:31] will retry after 259.470683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.378104  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.378200  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.378242  401365 retry.go:31] will retry after 196.4073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
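	(Every "will retry after …" entry above is emitted by a backoff helper that reruns the failed apply with a growing, jittered delay. A minimal stand-in for that loop — a sketch of the shape the log implies, not minikube's actual retry.go:)

```go
// Sketch of retry-with-jittered-backoff, the pattern behind the
// "will retry after ..." lines. Attempt count and base delay are made up.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, doubling delay
// between failures, and returns the last error if every attempt fails.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	err := retry(5, 200*time.Millisecond, func() error {
		return errors.New("connect: connection refused") // stands in for the failing kubectl apply
	})
	fmt.Println("gave up:", err)
}
```

	(Jitter keeps concurrent retriers from synchronizing, which matters here since the storageclass and storage-provisioner manifests are being applied in parallel against the same apiserver.)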
	I1210 06:28:11.378345  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:11.575829  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.638697  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:11.638779  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.638826  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.638871  401365 retry.go:31] will retry after 208.428392ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.692820  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.696338  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.696370  401365 retry.go:31] will retry after 282.781918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.847619  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.878109  401365 type.go:168] "Request Body" body=""
	I1210 06:28:11.878199  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:11.878519  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:11.905645  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.908839  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.908880  401365 retry.go:31] will retry after 582.02813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.980121  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:12.039691  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.043135  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.043170  401365 retry.go:31] will retry after 432.314142ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.377666  401365 type.go:168] "Request Body" body=""
	I1210 06:28:12.377744  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:12.378081  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:12.476496  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:12.492099  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:12.562290  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.562336  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562356  401365 retry.go:31] will retry after 1.009011504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562409  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.562427  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562433  401365 retry.go:31] will retry after 937.221861ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
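	(The stderr text repeats kubectl's own hint: once the apiserver's /openapi/v2 endpoint is unreachable, client-side schema validation cannot run, and --validate=false would skip it. The log shows minikube keeps validation on and simply retries; a caller that did want the fallback could shell out as below — a sketch only, with an illustrative manifest path taken from the log:)

```go
// Sketch: fall back to kubectl apply without schema validation when the
// OpenAPI download fails. Whether skipping validation is acceptable is a
// policy choice; minikube's retry loop above deliberately does not do this.
package main

import (
	"fmt"
	"os/exec"
)

func applyManifest(path string) error {
	// First try a normal apply, which validates against /openapi/v2.
	if out, err := exec.Command("kubectl", "apply", "-f", path).CombinedOutput(); err == nil {
		fmt.Printf("applied %s:\n%s", path, out)
		return nil
	}
	// On failure, retry with validation disabled, as the error text suggests.
	out, err := exec.Command("kubectl", "apply", "--validate=false", "-f", path).CombinedOutput()
	fmt.Printf("apply --validate=false %s:\n%s", path, out)
	return err
}

func main() {
	if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println("apply failed:", err)
	}
}
```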
	I1210 06:28:12.877643  401365 type.go:168] "Request Body" body=""
	I1210 06:28:12.877787  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:12.878157  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:13.377677  401365 type.go:168] "Request Body" body=""
	I1210 06:28:13.377754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:13.378100  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:13.378160  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:13.500598  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:13.556443  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:13.560062  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.560116  401365 retry.go:31] will retry after 1.265541277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.572329  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:13.633856  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:13.637464  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.637509  401365 retry.go:31] will retry after 1.331173049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.877793  401365 type.go:168] "Request Body" body=""
	I1210 06:28:13.877888  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:13.878199  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.377638  401365 type.go:168] "Request Body" body=""
	I1210 06:28:14.377730  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:14.378082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.825793  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:14.878190  401365 type.go:168] "Request Body" body=""
	I1210 06:28:14.878261  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:14.878521  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.884055  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:14.884152  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:14.884201  401365 retry.go:31] will retry after 1.396995132s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:14.969467  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:15.059973  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:15.064387  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:15.064489  401365 retry.go:31] will retry after 957.92161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:15.377700  401365 type.go:168] "Request Body" body=""
	I1210 06:28:15.377772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:15.378126  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:15.378206  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:15.877555  401365 type.go:168] "Request Body" body=""
	I1210 06:28:15.877664  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:15.877987  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:16.023398  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:16.083212  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:16.083269  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.083288  401365 retry.go:31] will retry after 3.316582994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.281469  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:16.346229  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:16.346265  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.346285  401365 retry.go:31] will retry after 2.05295153s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.378603  401365 type.go:168] "Request Body" body=""
	I1210 06:28:16.378688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:16.379017  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:16.877615  401365 type.go:168] "Request Body" body=""
	I1210 06:28:16.877690  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:16.878019  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:17.377588  401365 type.go:168] "Request Body" body=""
	I1210 06:28:17.377663  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:17.377946  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:17.877651  401365 type.go:168] "Request Body" body=""
	I1210 06:28:17.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:17.878120  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:17.878201  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
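	(Interleaved with the applies, node_ready.go polls GET /api/v1/nodes/functional-253997 roughly every 500ms and logs a warning each time the dial is refused. A stdlib-only sketch of that loop follows, with the endpoint and the 6m budget taken from the log; it skips TLS verification and client credentials, which the real client supplies from the kubeconfig, so treat both as assumptions for a throwaway test cluster:)

```go
// Stdlib-only sketch of the readiness poll in the log: GET the node object
// every 500ms until it reports Ready or a 6-minute deadline passes.
// TLS verification and auth are omitted; a real client uses the kubeconfig.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-253997"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver is restarting
			fmt.Println("will retry:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		var n node
		err = json.NewDecoder(resp.Body).Decode(&n)
		resp.Body.Close()
		if err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node Ready")
}
```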
	I1210 06:28:18.377673  401365 type.go:168] "Request Body" body=""
	I1210 06:28:18.377749  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:18.378108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:18.400386  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:18.462469  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:18.462509  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:18.462528  401365 retry.go:31] will retry after 3.621738225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:18.877637  401365 type.go:168] "Request Body" body=""
	I1210 06:28:18.877719  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:18.878031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:19.377699  401365 type.go:168] "Request Body" body=""
	I1210 06:28:19.377775  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:19.378123  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:19.400389  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:19.462507  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:19.462542  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:19.462562  401365 retry.go:31] will retry after 6.347571238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:19.878220  401365 type.go:168] "Request Body" body=""
	I1210 06:28:19.878303  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:19.878573  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:19.878624  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:20.378571  401365 type.go:168] "Request Body" body=""
	I1210 06:28:20.378643  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:20.378957  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:20.877707  401365 type.go:168] "Request Body" body=""
	I1210 06:28:20.877781  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:20.878082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:21.377732  401365 type.go:168] "Request Body" body=""
	I1210 06:28:21.377840  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:21.378217  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:21.877933  401365 type.go:168] "Request Body" body=""
	I1210 06:28:21.878005  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:21.878280  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:22.084823  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:22.150796  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:22.150852  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:22.150872  401365 retry.go:31] will retry after 8.518894464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:22.378239  401365 type.go:168] "Request Body" body=""
	I1210 06:28:22.378314  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:22.378638  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:22.378700  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:22.878392  401365 type.go:168] "Request Body" body=""
	I1210 06:28:22.878470  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:22.878811  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:23.378412  401365 type.go:168] "Request Body" body=""
	I1210 06:28:23.378493  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:23.378816  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:23.878580  401365 type.go:168] "Request Body" body=""
	I1210 06:28:23.878657  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:23.879035  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:24.377745  401365 type.go:168] "Request Body" body=""
	I1210 06:28:24.377829  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:24.378165  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:24.878042  401365 type.go:168] "Request Body" body=""
	I1210 06:28:24.878110  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:24.878379  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:24.878424  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:25.378073  401365 type.go:168] "Request Body" body=""
	I1210 06:28:25.378148  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:25.378492  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:25.811094  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:25.867131  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:25.870279  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:25.870312  401365 retry.go:31] will retry after 4.064346895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:25.878534  401365 type.go:168] "Request Body" body=""
	I1210 06:28:25.878610  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:25.878933  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:26.378357  401365 type.go:168] "Request Body" body=""
	I1210 06:28:26.378423  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:26.378728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:26.878539  401365 type.go:168] "Request Body" body=""
	I1210 06:28:26.878615  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:26.878903  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:26.878950  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:27.377638  401365 type.go:168] "Request Body" body=""
	I1210 06:28:27.377740  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:27.378052  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:27.878386  401365 type.go:168] "Request Body" body=""
	I1210 06:28:27.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:27.878757  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:28.378587  401365 type.go:168] "Request Body" body=""
	I1210 06:28:28.378670  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:28.379016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:28.877671  401365 type.go:168] "Request Body" body=""
	I1210 06:28:28.877745  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:28.878083  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:29.378412  401365 type.go:168] "Request Body" body=""
	I1210 06:28:29.378486  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:29.378756  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:29.378811  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:29.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:28:29.877767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:29.878126  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:29.935383  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:29.993267  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:29.993316  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:29.993335  401365 retry.go:31] will retry after 13.293540925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.377660  401365 type.go:168] "Request Body" body=""
	I1210 06:28:30.377733  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:30.378062  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:30.670723  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:30.731809  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:30.735358  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.735395  401365 retry.go:31] will retry after 6.439855049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.877636  401365 type.go:168] "Request Body" body=""
	I1210 06:28:30.877707  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:30.878037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:31.377698  401365 type.go:168] "Request Body" body=""
	I1210 06:28:31.377773  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:31.378112  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:31.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:28:31.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:31.878135  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:31.878196  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:32.377829  401365 type.go:168] "Request Body" body=""
	I1210 06:28:32.377902  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:32.378167  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:32.877646  401365 type.go:168] "Request Body" body=""
	I1210 06:28:32.877742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:32.878081  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:33.377665  401365 type.go:168] "Request Body" body=""
	I1210 06:28:33.377750  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:33.378080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:33.878372  401365 type.go:168] "Request Body" body=""
	I1210 06:28:33.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:33.878725  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:33.878768  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:34.378621  401365 type.go:168] "Request Body" body=""
	I1210 06:28:34.378700  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:34.379046  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:34.877880  401365 type.go:168] "Request Body" body=""
	I1210 06:28:34.877952  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:34.878345  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:35.378044  401365 type.go:168] "Request Body" body=""
	I1210 06:28:35.378114  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:35.378389  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:35.878221  401365 type.go:168] "Request Body" body=""
	I1210 06:28:35.878303  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:35.878728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:35.878785  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:36.378584  401365 type.go:168] "Request Body" body=""
	I1210 06:28:36.378665  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:36.379008  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:36.878369  401365 type.go:168] "Request Body" body=""
	I1210 06:28:36.878444  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:36.878707  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:37.176405  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:37.232388  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:37.235885  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:37.235920  401365 retry.go:31] will retry after 10.78688793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
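This is the addon-apply retry loop: each failed kubectl apply is re-queued by retry.go with a delay that grows per attempt (10.79 s here, then 14.63 s, 18.10 s, 29.63 s and 43.85 s later in the log), i.e. jittered, roughly exponential backoff. A self-contained sketch of that pattern, assuming a ~1.5x growth factor with jitter (retryWithBackoff is an illustrative name, not minikube's real API; the demo uses a short base delay where the log's first real delay was ~10.8 s):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs step until it succeeds or attempts are
    // exhausted, sleeping a jittered, growing delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, step func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = step(); err == nil {
                return nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %s: %v\n", jittered, err)
            time.Sleep(jittered)
            delay = delay * 3 / 2 // grow ~1.5x per attempt, as the logged delays suggest
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithBackoff(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 4 {
                return fmt.Errorf("connect: connection refused")
            }
            return nil
        })
        fmt.Println("final:", err)
    }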
	I1210 06:28:37.378207  401365 type.go:168] "Request Body" body=""
	I1210 06:28:37.378282  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:37.378581  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:37.878419  401365 type.go:168] "Request Body" body=""
	I1210 06:28:37.878495  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:37.878813  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:37.878863  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:38.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:28:38.378474  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:38.378754  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:38.878544  401365 type.go:168] "Request Body" body=""
	I1210 06:28:38.878622  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:38.878987  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:39.377713  401365 type.go:168] "Request Body" body=""
	I1210 06:28:39.377797  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:39.378129  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:39.878083  401365 type.go:168] "Request Body" body=""
	I1210 06:28:39.878150  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:39.878429  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:40.378442  401365 type.go:168] "Request Body" body=""
	I1210 06:28:40.378523  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:40.378856  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:40.378911  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:40.877583  401365 type.go:168] "Request Body" body=""
	I1210 06:28:40.877679  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:40.878034  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:41.378374  401365 type.go:168] "Request Body" body=""
	I1210 06:28:41.378447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:41.378715  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:41.878491  401365 type.go:168] "Request Body" body=""
	I1210 06:28:41.878566  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:41.878923  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:42.377671  401365 type.go:168] "Request Body" body=""
	I1210 06:28:42.377751  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:42.378141  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:42.877599  401365 type.go:168] "Request Body" body=""
	I1210 06:28:42.877683  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:42.877945  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:42.877984  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:43.287649  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:43.346928  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:43.346975  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:43.346995  401365 retry.go:31] will retry after 14.625741063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:43.378235  401365 type.go:168] "Request Body" body=""
	I1210 06:28:43.378315  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:43.378642  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:43.878431  401365 type.go:168] "Request Body" body=""
	I1210 06:28:43.878526  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:43.878848  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:44.378341  401365 type.go:168] "Request Body" body=""
	I1210 06:28:44.378412  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:44.378674  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:44.877586  401365 type.go:168] "Request Body" body=""
	I1210 06:28:44.877680  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:44.878028  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:44.878086  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:45.377798  401365 type.go:168] "Request Body" body=""
	I1210 06:28:45.377879  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:45.378245  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:45.878503  401365 type.go:168] "Request Body" body=""
	I1210 06:28:45.878572  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:45.878831  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:46.378595  401365 type.go:168] "Request Body" body=""
	I1210 06:28:46.378672  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:46.378982  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:46.877682  401365 type.go:168] "Request Body" body=""
	I1210 06:28:46.877759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:46.878095  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:46.878155  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:47.377841  401365 type.go:168] "Request Body" body=""
	I1210 06:28:47.377917  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:47.378263  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:47.877992  401365 type.go:168] "Request Body" body=""
	I1210 06:28:47.878078  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:47.878438  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:48.023828  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:48.081536  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:48.084895  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:48.084933  401365 retry.go:31] will retry after 18.097374996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
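Note that the --validate=false hint in the kubectl stderr would not get these applies through: validation only fails because fetching the OpenAPI schema needs the apiserver, and with nothing accepting connections on port 8441 the apply itself would be refused the same way. Retrying until the apiserver is back, as the log does, is the only path forward here.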
	I1210 06:28:48.378332  401365 type.go:168] "Request Body" body=""
	I1210 06:28:48.378422  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:48.378753  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:48.878419  401365 type.go:168] "Request Body" body=""
	I1210 06:28:48.878497  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:48.878762  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:48.878816  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:49.378574  401365 type.go:168] "Request Body" body=""
	I1210 06:28:49.378648  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:49.378945  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:49.877700  401365 type.go:168] "Request Body" body=""
	I1210 06:28:49.877800  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:49.878143  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:50.377920  401365 type.go:168] "Request Body" body=""
	I1210 06:28:50.377988  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:50.378294  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:50.877693  401365 type.go:168] "Request Body" body=""
	I1210 06:28:50.877785  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:50.878095  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:51.377686  401365 type.go:168] "Request Body" body=""
	I1210 06:28:51.377791  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:51.378134  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:51.378207  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:51.877781  401365 type.go:168] "Request Body" body=""
	I1210 06:28:51.877851  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:51.878166  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:52.377911  401365 type.go:168] "Request Body" body=""
	I1210 06:28:52.377995  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:52.378322  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:52.878024  401365 type.go:168] "Request Body" body=""
	I1210 06:28:52.878097  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:52.878439  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:53.377622  401365 type.go:168] "Request Body" body=""
	I1210 06:28:53.377713  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:53.378024  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:53.877755  401365 type.go:168] "Request Body" body=""
	I1210 06:28:53.877852  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:53.878190  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:53.878248  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:54.377697  401365 type.go:168] "Request Body" body=""
	I1210 06:28:54.377777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:54.378108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:54.877974  401365 type.go:168] "Request Body" body=""
	I1210 06:28:54.878043  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:54.878312  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:55.378006  401365 type.go:168] "Request Body" body=""
	I1210 06:28:55.378086  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:55.378481  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:55.878103  401365 type.go:168] "Request Body" body=""
	I1210 06:28:55.878195  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:55.878572  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:55.878630  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:56.378220  401365 type.go:168] "Request Body" body=""
	I1210 06:28:56.378297  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:56.378560  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:56.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:28:56.878464  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:56.878801  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:57.378592  401365 type.go:168] "Request Body" body=""
	I1210 06:28:57.378672  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:57.379008  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:57.877621  401365 type.go:168] "Request Body" body=""
	I1210 06:28:57.877696  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:57.878001  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:57.973321  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:58.030522  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:58.034296  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:58.034334  401365 retry.go:31] will retry after 29.63385811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
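Two different addresses are being refused at this point: the poller's node IP (192.168.49.2:8441) and kubectl's loopback endpoint ([::1]:8441). When both refuse rather than time out, the apiserver process itself is down; a refused dial means the host answered with a reset, while a timeout would instead suggest a routing or firewall problem. A quick probe that makes the distinction visible (addresses taken from the log; the probe itself is illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" => host up, nothing listening on the port;
        // "i/o timeout" => host or route unreachable.
        for _, addr := range []string{"192.168.49.2:8441", "[::1]:8441"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("%s: %v\n", addr, err)
                continue
            }
            conn.Close()
            fmt.Printf("%s: listening\n", addr)
        }
    }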
	I1210 06:28:58.377818  401365 type.go:168] "Request Body" body=""
	I1210 06:28:58.377897  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:58.378240  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:58.378316  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:58.878004  401365 type.go:168] "Request Body" body=""
	I1210 06:28:58.878100  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:58.878429  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:59.378237  401365 type.go:168] "Request Body" body=""
	I1210 06:28:59.378307  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:59.378610  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:59.878397  401365 type.go:168] "Request Body" body=""
	I1210 06:28:59.878486  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:59.878865  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:00.377830  401365 type.go:168] "Request Body" body=""
	I1210 06:29:00.377911  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:00.378308  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:00.378388  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:00.877903  401365 type.go:168] "Request Body" body=""
	I1210 06:29:00.877979  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:00.878324  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:01.378045  401365 type.go:168] "Request Body" body=""
	I1210 06:29:01.378142  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:01.378492  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:01.878290  401365 type.go:168] "Request Body" body=""
	I1210 06:29:01.878364  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:01.878682  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:02.378481  401365 type.go:168] "Request Body" body=""
	I1210 06:29:02.378563  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:02.378938  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:02.379007  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:02.877673  401365 type.go:168] "Request Body" body=""
	I1210 06:29:02.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:02.878144  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:03.378397  401365 type.go:168] "Request Body" body=""
	I1210 06:29:03.378485  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:03.378752  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:03.878546  401365 type.go:168] "Request Body" body=""
	I1210 06:29:03.878622  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:03.878936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:04.377644  401365 type.go:168] "Request Body" body=""
	I1210 06:29:04.377748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:04.378092  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:04.877915  401365 type.go:168] "Request Body" body=""
	I1210 06:29:04.877993  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:04.878265  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:04.878310  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:05.377970  401365 type.go:168] "Request Body" body=""
	I1210 06:29:05.378056  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:05.378385  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:05.877707  401365 type.go:168] "Request Body" body=""
	I1210 06:29:05.877783  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:05.878096  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:06.182558  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:29:06.240148  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:06.243928  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:29:06.243964  401365 retry.go:31] will retry after 43.852698404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:29:06.378184  401365 type.go:168] "Request Body" body=""
	I1210 06:29:06.378259  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:06.378534  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:06.878434  401365 type.go:168] "Request Body" body=""
	I1210 06:29:06.878516  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:06.878892  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:06.878963  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:07.377680  401365 type.go:168] "Request Body" body=""
	I1210 06:29:07.377787  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:07.378114  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:07.878358  401365 type.go:168] "Request Body" body=""
	I1210 06:29:07.878442  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:07.878707  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:08.378589  401365 type.go:168] "Request Body" body=""
	I1210 06:29:08.378685  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:08.379006  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:08.877738  401365 type.go:168] "Request Body" body=""
	I1210 06:29:08.877836  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:08.878152  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:09.377599  401365 type.go:168] "Request Body" body=""
	I1210 06:29:09.377678  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:09.377964  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:09.378009  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:09.878613  401365 type.go:168] "Request Body" body=""
	I1210 06:29:09.878706  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:09.879055  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:10.377636  401365 type.go:168] "Request Body" body=""
	I1210 06:29:10.377729  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:10.378057  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:10.878414  401365 type.go:168] "Request Body" body=""
	I1210 06:29:10.878485  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:10.878853  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:11.377614  401365 type.go:168] "Request Body" body=""
	I1210 06:29:11.377691  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:11.378087  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:11.378157  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:11.877843  401365 type.go:168] "Request Body" body=""
	I1210 06:29:11.877917  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:11.878206  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:12.377859  401365 type.go:168] "Request Body" body=""
	I1210 06:29:12.377937  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:12.378284  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:12.877669  401365 type.go:168] "Request Body" body=""
	I1210 06:29:12.877752  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:12.878075  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:13.377664  401365 type.go:168] "Request Body" body=""
	I1210 06:29:13.377747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:13.378093  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:13.878416  401365 type.go:168] "Request Body" body=""
	I1210 06:29:13.878494  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:13.878809  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:13.878870  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:14.377568  401365 type.go:168] "Request Body" body=""
	I1210 06:29:14.377643  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:14.377998  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:14.877670  401365 type.go:168] "Request Body" body=""
	I1210 06:29:14.877755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:14.878113  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:15.377598  401365 type.go:168] "Request Body" body=""
	I1210 06:29:15.377677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:15.377997  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:15.877667  401365 type.go:168] "Request Body" body=""
	I1210 06:29:15.877746  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:15.878091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:16.377666  401365 type.go:168] "Request Body" body=""
	I1210 06:29:16.377769  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:16.378076  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:16.378122  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:16.877619  401365 type.go:168] "Request Body" body=""
	I1210 06:29:16.877702  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:16.878021  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:17.377665  401365 type.go:168] "Request Body" body=""
	I1210 06:29:17.377748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:17.378090  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:17.877642  401365 type.go:168] "Request Body" body=""
	I1210 06:29:17.877734  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:17.878072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:18.378371  401365 type.go:168] "Request Body" body=""
	I1210 06:29:18.378462  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:18.378766  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:18.378828  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:18.878580  401365 type.go:168] "Request Body" body=""
	I1210 06:29:18.878658  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:18.879021  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:19.377663  401365 type.go:168] "Request Body" body=""
	I1210 06:29:19.377746  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:19.378079  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:19.877904  401365 type.go:168] "Request Body" body=""
	I1210 06:29:19.878012  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:19.878270  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:20.378288  401365 type.go:168] "Request Body" body=""
	I1210 06:29:20.378362  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:20.378707  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:20.878519  401365 type.go:168] "Request Body" body=""
	I1210 06:29:20.878594  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:20.878915  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:20.878972  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:21.377633  401365 type.go:168] "Request Body" body=""
	I1210 06:29:21.377760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:21.378102  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:21.877674  401365 type.go:168] "Request Body" body=""
	I1210 06:29:21.877777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:21.878104  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:22.377691  401365 type.go:168] "Request Body" body=""
	I1210 06:29:22.377786  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:22.378137  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:22.877604  401365 type.go:168] "Request Body" body=""
	I1210 06:29:22.877691  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:22.877964  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:23.377675  401365 type.go:168] "Request Body" body=""
	I1210 06:29:23.377762  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:23.378116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:23.378205  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:23.877699  401365 type.go:168] "Request Body" body=""
	I1210 06:29:23.877817  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:23.878164  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:24.377849  401365 type.go:168] "Request Body" body=""
	I1210 06:29:24.377937  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:24.378276  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:24.878345  401365 type.go:168] "Request Body" body=""
	I1210 06:29:24.878419  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:24.878834  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:25.378514  401365 type.go:168] "Request Body" body=""
	I1210 06:29:25.378602  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:25.378940  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:25.378995  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:25.878340  401365 type.go:168] "Request Body" body=""
	I1210 06:29:25.878408  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:25.878688  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:26.378495  401365 type.go:168] "Request Body" body=""
	I1210 06:29:26.378583  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:26.378915  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:26.877643  401365 type.go:168] "Request Body" body=""
	I1210 06:29:26.877742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:26.878074  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:27.378388  401365 type.go:168] "Request Body" body=""
	I1210 06:29:27.378458  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:27.378728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:27.669323  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:29:27.726986  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:27.731088  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:27.731190  401365 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
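The addons.go "apply failed, will retry" entry above reflects the addon-enable path: minikube shells out to the bundled kubectl and retries the apply when it exits non-zero, which here happens because the openapi validation download hits a refused connection on port 8441. Below is a sketch of that retry wrapper; applyWithRetry, the attempt count, and the backoff interval are assumptions for illustration, not minikube's actual code.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	// applyWithRetry is a hypothetical sketch of the retrying apply seen in the
	// log: run `kubectl apply --force -f <manifest>` under sudo with the given
	// kubeconfig, and retry on a non-zero exit while the apiserver is down.
	func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
				"apply", "--force", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s: %w: %s", manifest, err, out)
			log.Printf("apply failed, will retry: %v", lastErr)
			time.Sleep(2 * time.Second) // illustrative backoff, not minikube's
		}
		return lastErr
	}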
	I1210 06:29:27.878451  401365 type.go:168] "Request Body" body=""
	I1210 06:29:27.878523  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:27.878853  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:27.878910  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:28.378489  401365 type.go:168] "Request Body" body=""
	I1210 06:29:28.378564  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:28.378901  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:28.878380  401365 type.go:168] "Request Body" body=""
	I1210 06:29:28.878458  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:28.878719  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:29.378449  401365 type.go:168] "Request Body" body=""
	I1210 06:29:29.378529  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:29.378849  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:29.877584  401365 type.go:168] "Request Body" body=""
	I1210 06:29:29.877661  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:29.878000  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:30.377937  401365 type.go:168] "Request Body" body=""
	I1210 06:29:30.378012  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:30.378326  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:30.378387  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:30.877919  401365 type.go:168] "Request Body" body=""
	I1210 06:29:30.878019  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:30.878352  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:31.377915  401365 type.go:168] "Request Body" body=""
	I1210 06:29:31.378002  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:31.378351  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:31.878025  401365 type.go:168] "Request Body" body=""
	I1210 06:29:31.878128  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:31.878461  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:32.378235  401365 type.go:168] "Request Body" body=""
	I1210 06:29:32.378305  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:32.378637  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:32.378712  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:32.878497  401365 type.go:168] "Request Body" body=""
	I1210 06:29:32.878570  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:32.878897  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:33.378428  401365 type.go:168] "Request Body" body=""
	I1210 06:29:33.378500  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:33.378769  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:33.877562  401365 type.go:168] "Request Body" body=""
	I1210 06:29:33.877640  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:33.877963  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:34.377713  401365 type.go:168] "Request Body" body=""
	I1210 06:29:34.377821  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:34.378167  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:34.877924  401365 type.go:168] "Request Body" body=""
	I1210 06:29:34.877993  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:34.878306  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:34.878365  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:35.378234  401365 type.go:168] "Request Body" body=""
	I1210 06:29:35.378332  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:35.378669  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:35.878465  401365 type.go:168] "Request Body" body=""
	I1210 06:29:35.878539  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:35.878861  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:36.378415  401365 type.go:168] "Request Body" body=""
	I1210 06:29:36.378520  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:36.378846  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:36.877600  401365 type.go:168] "Request Body" body=""
	I1210 06:29:36.877689  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:36.878017  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:37.377713  401365 type.go:168] "Request Body" body=""
	I1210 06:29:37.377800  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:37.378154  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:37.378223  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:37.878379  401365 type.go:168] "Request Body" body=""
	I1210 06:29:37.878466  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:37.878806  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:38.378634  401365 type.go:168] "Request Body" body=""
	I1210 06:29:38.378721  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:38.379089  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:38.877647  401365 type.go:168] "Request Body" body=""
	I1210 06:29:38.877743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:38.878098  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:39.377834  401365 type.go:168] "Request Body" body=""
	I1210 06:29:39.377905  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:39.378160  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:39.878109  401365 type.go:168] "Request Body" body=""
	I1210 06:29:39.878184  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:39.878538  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:39.878595  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:40.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:29:40.378476  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:40.378793  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:40.878462  401365 type.go:168] "Request Body" body=""
	I1210 06:29:40.878582  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:40.878971  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:41.377652  401365 type.go:168] "Request Body" body=""
	I1210 06:29:41.377732  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:41.378135  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:41.877884  401365 type.go:168] "Request Body" body=""
	I1210 06:29:41.877962  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:41.878325  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:42.377611  401365 type.go:168] "Request Body" body=""
	I1210 06:29:42.377693  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:42.378065  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:42.378123  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:42.877666  401365 type.go:168] "Request Body" body=""
	I1210 06:29:42.877738  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:42.878090  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:43.377808  401365 type.go:168] "Request Body" body=""
	I1210 06:29:43.377882  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:43.378222  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:43.877625  401365 type.go:168] "Request Body" body=""
	I1210 06:29:43.877697  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:43.877990  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:44.377652  401365 type.go:168] "Request Body" body=""
	I1210 06:29:44.377728  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:44.378065  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:44.877919  401365 type.go:168] "Request Body" body=""
	I1210 06:29:44.878017  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:44.878351  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:44.878422  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:45.378184  401365 type.go:168] "Request Body" body=""
	I1210 06:29:45.378259  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:45.378530  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:45.878292  401365 type.go:168] "Request Body" body=""
	I1210 06:29:45.878369  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:45.878717  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:46.378381  401365 type.go:168] "Request Body" body=""
	I1210 06:29:46.378455  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:46.378778  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:46.878419  401365 type.go:168] "Request Body" body=""
	I1210 06:29:46.878504  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:46.878818  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:46.878868  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:47.377582  401365 type.go:168] "Request Body" body=""
	I1210 06:29:47.377662  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:47.378008  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:47.878425  401365 type.go:168] "Request Body" body=""
	I1210 06:29:47.878508  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:47.878839  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:48.378385  401365 type.go:168] "Request Body" body=""
	I1210 06:29:48.378477  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:48.378769  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:48.878588  401365 type.go:168] "Request Body" body=""
	I1210 06:29:48.878669  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:48.878986  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:48.879047  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:49.377711  401365 type.go:168] "Request Body" body=""
	I1210 06:29:49.377790  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:49.378153  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:49.878038  401365 type.go:168] "Request Body" body=""
	I1210 06:29:49.878111  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:49.878364  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:50.096947  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:29:50.160267  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:50.160316  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:50.160396  401365 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 06:29:50.163553  401365 out.go:179] * Enabled addons: 
	I1210 06:29:50.167218  401365 addons.go:530] duration metric: took 1m39.789022145s for enable addons: enabled=[]
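Even with every callback failing, the enable step completes and reports an empty addon list plus a duration metric, as in the two lines above. The "took 1m39.789022145s" value is formatted like a Go time.Duration, so a plain wall-clock measurement is a reasonable assumption; a sketch under that assumption:

	package main

	import (
		"log"
		"time"
	)

	func main() {
		start := time.Now()
		enabled := []string{} // every addon callback failed in this run
		// ... addon enable callbacks would run (and fail) here ...
		log.Printf("duration metric: took %s for enable addons: enabled=%v",
			time.Since(start), enabled)
	}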
	I1210 06:29:50.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:29:50.377718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:50.378070  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:50.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:29:50.877785  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:50.878103  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:51.378394  401365 type.go:168] "Request Body" body=""
	I1210 06:29:51.378472  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:51.378746  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:51.378813  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:51.878588  401365 type.go:168] "Request Body" body=""
	I1210 06:29:51.878669  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:51.878981  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:52.377564  401365 type.go:168] "Request Body" body=""
	I1210 06:29:52.377654  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:52.378002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:52.878385  401365 type.go:168] "Request Body" body=""
	I1210 06:29:52.878456  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:52.878735  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:53.378623  401365 type.go:168] "Request Body" body=""
	I1210 06:29:53.378696  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:53.379007  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:53.379062  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:53.877727  401365 type.go:168] "Request Body" body=""
	I1210 06:29:53.877818  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:53.878163  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:54.377608  401365 type.go:168] "Request Body" body=""
	I1210 06:29:54.377697  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:54.378015  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:54.877713  401365 type.go:168] "Request Body" body=""
	I1210 06:29:54.877810  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:54.878175  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:55.377895  401365 type.go:168] "Request Body" body=""
	I1210 06:29:55.377968  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:55.378309  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:55.877991  401365 type.go:168] "Request Body" body=""
	I1210 06:29:55.878064  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:55.878416  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:55.878476  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:56.378216  401365 type.go:168] "Request Body" body=""
	I1210 06:29:56.378295  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:56.378666  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:56.878479  401365 type.go:168] "Request Body" body=""
	I1210 06:29:56.878557  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:56.878903  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:57.378397  401365 type.go:168] "Request Body" body=""
	I1210 06:29:57.378477  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:57.378742  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:57.878382  401365 type.go:168] "Request Body" body=""
	I1210 06:29:57.878456  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:57.878755  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:57.878801  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:58.378559  401365 type.go:168] "Request Body" body=""
	I1210 06:29:58.378645  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:58.378936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:58.877600  401365 type.go:168] "Request Body" body=""
	I1210 06:29:58.877684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:58.877957  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:59.377641  401365 type.go:168] "Request Body" body=""
	I1210 06:29:59.377718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:59.378043  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:59.878036  401365 type.go:168] "Request Body" body=""
	I1210 06:29:59.878111  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:59.878453  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:00.403040  401365 type.go:168] "Request Body" body=""
	I1210 06:30:00.403489  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:00.403971  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:00.404065  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:00.877628  401365 type.go:168] "Request Body" body=""
	I1210 06:30:00.877715  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:00.878111  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:01.378405  401365 type.go:168] "Request Body" body=""
	I1210 06:30:01.378490  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:01.378858  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:01.878587  401365 type.go:168] "Request Body" body=""
	I1210 06:30:01.878670  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:01.879048  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:02.377809  401365 type.go:168] "Request Body" body=""
	I1210 06:30:02.377884  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:02.378218  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:02.877618  401365 type.go:168] "Request Body" body=""
	I1210 06:30:02.877691  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:02.877969  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:02.878012  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:03.377736  401365 type.go:168] "Request Body" body=""
	I1210 06:30:03.377819  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:03.378180  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:03.877919  401365 type.go:168] "Request Body" body=""
	I1210 06:30:03.878016  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:03.878393  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:04.378222  401365 type.go:168] "Request Body" body=""
	I1210 06:30:04.378313  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:04.378635  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:04.878431  401365 type.go:168] "Request Body" body=""
	I1210 06:30:04.878505  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:04.879753  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1210 06:30:04.879813  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:05.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:30:05.378482  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:05.378830  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:05.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:30:05.878480  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:05.878741  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:06.378628  401365 type.go:168] "Request Body" body=""
	I1210 06:30:06.378703  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:06.379023  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:06.877690  401365 type.go:168] "Request Body" body=""
	I1210 06:30:06.877764  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:06.878119  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:07.377808  401365 type.go:168] "Request Body" body=""
	I1210 06:30:07.377895  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:07.378245  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:07.378302  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:07.877669  401365 type.go:168] "Request Body" body=""
	I1210 06:30:07.877760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:07.878098  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:08.377849  401365 type.go:168] "Request Body" body=""
	I1210 06:30:08.377929  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:08.378272  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:08.877616  401365 type.go:168] "Request Body" body=""
	I1210 06:30:08.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:08.878027  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:09.378016  401365 type.go:168] "Request Body" body=""
	I1210 06:30:09.378098  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:09.378433  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:09.378480  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:09.878345  401365 type.go:168] "Request Body" body=""
	I1210 06:30:09.878427  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:09.878801  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:10.378580  401365 type.go:168] "Request Body" body=""
	I1210 06:30:10.378704  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:10.379089  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:10.877668  401365 type.go:168] "Request Body" body=""
	I1210 06:30:10.877745  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:10.878101  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:11.377836  401365 type.go:168] "Request Body" body=""
	I1210 06:30:11.377918  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:11.378278  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:11.877990  401365 type.go:168] "Request Body" body=""
	I1210 06:30:11.878058  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:11.878328  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:11.878370  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-253997 continue every ~500ms from 06:30:12 through 06:31:11, each with an empty response (status="" milliseconds=0), while node_ready.go:55 logs the same "connect: connection refused" retry warning roughly every 2 seconds ...]
	W1210 06:31:11.379008  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:11.878378  401365 type.go:168] "Request Body" body=""
	I1210 06:31:11.878450  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:11.878715  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:12.378514  401365 type.go:168] "Request Body" body=""
	I1210 06:31:12.378591  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:12.378905  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:12.877633  401365 type.go:168] "Request Body" body=""
	I1210 06:31:12.877711  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:12.878080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:13.378362  401365 type.go:168] "Request Body" body=""
	I1210 06:31:13.378431  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:13.378728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:13.878515  401365 type.go:168] "Request Body" body=""
	I1210 06:31:13.878592  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:13.878918  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:13.878976  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:14.377681  401365 type.go:168] "Request Body" body=""
	I1210 06:31:14.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:14.378115  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:14.878072  401365 type.go:168] "Request Body" body=""
	I1210 06:31:14.878147  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:14.878405  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:15.378262  401365 type.go:168] "Request Body" body=""
	I1210 06:31:15.378345  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:15.378686  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:15.878492  401365 type.go:168] "Request Body" body=""
	I1210 06:31:15.878569  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:15.878935  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:16.378356  401365 type.go:168] "Request Body" body=""
	I1210 06:31:16.378441  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:16.378690  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:16.378731  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:16.878535  401365 type.go:168] "Request Body" body=""
	I1210 06:31:16.878609  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:16.878944  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:17.377679  401365 type.go:168] "Request Body" body=""
	I1210 06:31:17.377753  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:17.378118  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:17.877723  401365 type.go:168] "Request Body" body=""
	I1210 06:31:17.877797  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:17.878157  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:18.377666  401365 type.go:168] "Request Body" body=""
	I1210 06:31:18.377744  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:18.378082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:18.877660  401365 type.go:168] "Request Body" body=""
	I1210 06:31:18.877734  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:18.878083  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:18.878141  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:19.378341  401365 type.go:168] "Request Body" body=""
	I1210 06:31:19.378417  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:19.378680  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:19.878385  401365 type.go:168] "Request Body" body=""
	I1210 06:31:19.878484  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:19.878844  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:20.377537  401365 type.go:168] "Request Body" body=""
	I1210 06:31:20.377620  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:20.377967  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:20.877662  401365 type.go:168] "Request Body" body=""
	I1210 06:31:20.877748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:20.878176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:20.878224  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:21.377644  401365 type.go:168] "Request Body" body=""
	I1210 06:31:21.377723  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:21.378064  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:21.877799  401365 type.go:168] "Request Body" body=""
	I1210 06:31:21.877892  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:21.878256  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:22.377991  401365 type.go:168] "Request Body" body=""
	I1210 06:31:22.378069  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:22.378361  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:22.877671  401365 type.go:168] "Request Body" body=""
	I1210 06:31:22.877765  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:22.878106  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:23.377677  401365 type.go:168] "Request Body" body=""
	I1210 06:31:23.377772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:23.378156  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:23.378228  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:23.877603  401365 type.go:168] "Request Body" body=""
	I1210 06:31:23.877676  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:23.877972  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:24.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:31:24.377753  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:24.378120  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:24.877983  401365 type.go:168] "Request Body" body=""
	I1210 06:31:24.878078  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:24.878441  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:25.378227  401365 type.go:168] "Request Body" body=""
	I1210 06:31:25.378296  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:25.378552  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:25.378598  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:25.878364  401365 type.go:168] "Request Body" body=""
	I1210 06:31:25.878444  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:25.878791  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:26.377537  401365 type.go:168] "Request Body" body=""
	I1210 06:31:26.377611  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:26.377959  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:26.878388  401365 type.go:168] "Request Body" body=""
	I1210 06:31:26.878456  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:26.878725  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:27.378513  401365 type.go:168] "Request Body" body=""
	I1210 06:31:27.378591  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:27.378938  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:27.378993  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:27.877665  401365 type.go:168] "Request Body" body=""
	I1210 06:31:27.877739  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:27.878092  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:28.378425  401365 type.go:168] "Request Body" body=""
	I1210 06:31:28.378506  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:28.378821  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:28.877546  401365 type.go:168] "Request Body" body=""
	I1210 06:31:28.877631  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:28.878002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:29.377725  401365 type.go:168] "Request Body" body=""
	I1210 06:31:29.377802  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:29.378176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:29.878060  401365 type.go:168] "Request Body" body=""
	I1210 06:31:29.878133  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:29.878404  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:29.878448  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:30.378433  401365 type.go:168] "Request Body" body=""
	I1210 06:31:30.378508  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:30.378874  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:30.877621  401365 type.go:168] "Request Body" body=""
	I1210 06:31:30.877699  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:30.878072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:31.377633  401365 type.go:168] "Request Body" body=""
	I1210 06:31:31.377704  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:31.378026  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:31.877665  401365 type.go:168] "Request Body" body=""
	I1210 06:31:31.877748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:31.878092  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:32.377680  401365 type.go:168] "Request Body" body=""
	I1210 06:31:32.377759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:32.378156  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:32.378215  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:32.878393  401365 type.go:168] "Request Body" body=""
	I1210 06:31:32.878461  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:32.878721  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:33.378508  401365 type.go:168] "Request Body" body=""
	I1210 06:31:33.378585  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:33.379111  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:33.877686  401365 type.go:168] "Request Body" body=""
	I1210 06:31:33.877775  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:33.878146  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:34.377671  401365 type.go:168] "Request Body" body=""
	I1210 06:31:34.377743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:34.378043  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:34.877949  401365 type.go:168] "Request Body" body=""
	I1210 06:31:34.878028  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:34.878374  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:34.878438  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:35.378226  401365 type.go:168] "Request Body" body=""
	I1210 06:31:35.378306  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:35.378649  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:35.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:31:35.878471  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:35.878748  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:36.378551  401365 type.go:168] "Request Body" body=""
	I1210 06:31:36.378631  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:36.378948  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:36.877548  401365 type.go:168] "Request Body" body=""
	I1210 06:31:36.877626  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:36.877995  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:37.378404  401365 type.go:168] "Request Body" body=""
	I1210 06:31:37.378472  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:37.378739  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:37.378783  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:37.878571  401365 type.go:168] "Request Body" body=""
	I1210 06:31:37.878646  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:37.878969  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:38.378416  401365 type.go:168] "Request Body" body=""
	I1210 06:31:38.378491  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:38.378834  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:38.878423  401365 type.go:168] "Request Body" body=""
	I1210 06:31:38.878499  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:38.878770  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:39.378611  401365 type.go:168] "Request Body" body=""
	I1210 06:31:39.378694  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:39.379044  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:39.379105  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:39.878018  401365 type.go:168] "Request Body" body=""
	I1210 06:31:39.878102  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:39.878461  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:40.378264  401365 type.go:168] "Request Body" body=""
	I1210 06:31:40.378348  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:40.378617  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:40.878427  401365 type.go:168] "Request Body" body=""
	I1210 06:31:40.878510  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:40.878851  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:41.377583  401365 type.go:168] "Request Body" body=""
	I1210 06:31:41.377658  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:41.377991  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:41.877560  401365 type.go:168] "Request Body" body=""
	I1210 06:31:41.877633  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:41.877903  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:41.877948  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:42.377649  401365 type.go:168] "Request Body" body=""
	I1210 06:31:42.377727  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:42.378093  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:42.877664  401365 type.go:168] "Request Body" body=""
	I1210 06:31:42.877739  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:42.878032  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:43.378436  401365 type.go:168] "Request Body" body=""
	I1210 06:31:43.378507  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:43.378831  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:43.878454  401365 type.go:168] "Request Body" body=""
	I1210 06:31:43.878546  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:43.878900  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:43.878962  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:44.378527  401365 type.go:168] "Request Body" body=""
	I1210 06:31:44.378608  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:44.378911  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:44.877852  401365 type.go:168] "Request Body" body=""
	I1210 06:31:44.877944  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:44.878230  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:45.377683  401365 type.go:168] "Request Body" body=""
	I1210 06:31:45.377757  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:45.378232  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:45.877964  401365 type.go:168] "Request Body" body=""
	I1210 06:31:45.878060  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:45.878412  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.378182  401365 type.go:168] "Request Body" body=""
	I1210 06:31:46.378267  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.378573  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:46.378621  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:46.878427  401365 type.go:168] "Request Body" body=""
	I1210 06:31:46.878510  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.878849  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.378554  401365 type.go:168] "Request Body" body=""
	I1210 06:31:47.378637  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.379010  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.878381  401365 type.go:168] "Request Body" body=""
	I1210 06:31:47.878453  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.878751  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:48.378580  401365 type.go:168] "Request Body" body=""
	I1210 06:31:48.378660  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.378984  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:48.379037  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:48.877565  401365 type.go:168] "Request Body" body=""
	I1210 06:31:48.877642  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.877972  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.378371  401365 type.go:168] "Request Body" body=""
	I1210 06:31:49.378448  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.378712  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.878386  401365 type.go:168] "Request Body" body=""
	I1210 06:31:49.878461  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.878790  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.377587  401365 type.go:168] "Request Body" body=""
	I1210 06:31:50.377673  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.378035  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.878395  401365 type.go:168] "Request Body" body=""
	I1210 06:31:50.878469  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.878754  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:50.878808  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:51.378548  401365 type.go:168] "Request Body" body=""
	I1210 06:31:51.378627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.378976  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.877595  401365 type.go:168] "Request Body" body=""
	I1210 06:31:51.877677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.878064  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.378358  401365 type.go:168] "Request Body" body=""
	I1210 06:31:52.378433  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.378695  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.878474  401365 type.go:168] "Request Body" body=""
	I1210 06:31:52.878551  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.878895  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:52.878957  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:53.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:31:53.377721  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.378047  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.877607  401365 type.go:168] "Request Body" body=""
	I1210 06:31:53.877682  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.878066  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.377680  401365 type.go:168] "Request Body" body=""
	I1210 06:31:54.377759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.378069  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.877984  401365 type.go:168] "Request Body" body=""
	I1210 06:31:54.878068  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.878451  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:55.378227  401365 type.go:168] "Request Body" body=""
	I1210 06:31:55.378305  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.378567  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:55.378612  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:55.878449  401365 type.go:168] "Request Body" body=""
	I1210 06:31:55.878524  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.878878  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.377607  401365 type.go:168] "Request Body" body=""
	I1210 06:31:56.377684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.378031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:31:56.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.878731  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:57.378523  401365 type.go:168] "Request Body" body=""
	I1210 06:31:57.378605  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.378963  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:57.379024  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:57.878422  401365 type.go:168] "Request Body" body=""
	I1210 06:31:57.878496  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.878837  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:58.378369  401365 type.go:168] "Request Body" body=""
	I1210 06:31:58.378450  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.378724  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:58.878516  401365 type.go:168] "Request Body" body=""
	I1210 06:31:58.878590  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.878936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:31:59.377756  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.378079  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.878003  401365 type.go:168] "Request Body" body=""
	I1210 06:31:59.878079  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.878346  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:59.878388  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:00.378620  401365 type.go:168] "Request Body" body=""
	I1210 06:32:00.378720  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.379187  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:00.877753  401365 type.go:168] "Request Body" body=""
	I1210 06:32:00.877830  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.878187  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.377623  401365 type.go:168] "Request Body" body=""
	I1210 06:32:01.377694  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.377960  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-253997 request/response cycle shown above repeats unchanged every ~500ms from 06:32:01 through 06:33:03, with the apiserver never answering; roughly every 2.5s the poller logs a warning of the form:]
	W1210 06:32:02.378152  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	[... remaining identical request/response entries and periodic warnings elided ...]
	I1210 06:33:03.877594  401365 type.go:168] "Request Body" body=""
	I1210 06:33:03.877673  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.878031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:03.878095  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:04.377623  401365 type.go:168] "Request Body" body=""
	I1210 06:33:04.377695  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.378016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:04.878008  401365 type.go:168] "Request Body" body=""
	I1210 06:33:04.878082  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.878402  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.378189  401365 type.go:168] "Request Body" body=""
	I1210 06:33:05.378264  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.378599  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.878376  401365 type.go:168] "Request Body" body=""
	I1210 06:33:05.878455  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.878734  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:05.878779  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:06.378572  401365 type.go:168] "Request Body" body=""
	I1210 06:33:06.378660  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.379002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:06.877676  401365 type.go:168] "Request Body" body=""
	I1210 06:33:06.877754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.878110  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.378442  401365 type.go:168] "Request Body" body=""
	I1210 06:33:07.378521  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.378800  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.877549  401365 type.go:168] "Request Body" body=""
	I1210 06:33:07.877629  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.878000  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:08.377709  401365 type.go:168] "Request Body" body=""
	I1210 06:33:08.377785  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.378149  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:08.378206  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:08.877866  401365 type.go:168] "Request Body" body=""
	I1210 06:33:08.877938  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.878266  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.377997  401365 type.go:168] "Request Body" body=""
	I1210 06:33:09.378074  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.378430  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.878278  401365 type.go:168] "Request Body" body=""
	I1210 06:33:09.878362  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.878709  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:10.378535  401365 type.go:168] "Request Body" body=""
	I1210 06:33:10.378614  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.378892  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:10.378949  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:10.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:10.877696  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.878045  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:11.377636  401365 type.go:168] "Request Body" body=""
	I1210 06:33:11.377715  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.378069  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:11.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:33:11.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.878741  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:12.378537  401365 type.go:168] "Request Body" body=""
	I1210 06:33:12.378621  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.378959  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:12.379018  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:12.877685  401365 type.go:168] "Request Body" body=""
	I1210 06:33:12.877767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.878108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:13.377595  401365 type.go:168] "Request Body" body=""
	I1210 06:33:13.377667  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.377991  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:13.877696  401365 type.go:168] "Request Body" body=""
	I1210 06:33:13.877788  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.878233  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.377670  401365 type.go:168] "Request Body" body=""
	I1210 06:33:14.377745  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.378062  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.878087  401365 type.go:168] "Request Body" body=""
	I1210 06:33:14.878167  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.878437  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:14.878481  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:15.378338  401365 type.go:168] "Request Body" body=""
	I1210 06:33:15.378427  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.378799  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:15.877556  401365 type.go:168] "Request Body" body=""
	I1210 06:33:15.877630  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.878034  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:16.378366  401365 type.go:168] "Request Body" body=""
	I1210 06:33:16.378435  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.378773  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:16.878569  401365 type.go:168] "Request Body" body=""
	I1210 06:33:16.878643  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.879012  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:16.879074  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:17.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:33:17.377747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.378122  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:17.877820  401365 type.go:168] "Request Body" body=""
	I1210 06:33:17.877897  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.878175  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:18.377654  401365 type.go:168] "Request Body" body=""
	I1210 06:33:18.377727  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:18.378073  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:18.877668  401365 type.go:168] "Request Body" body=""
	I1210 06:33:18.877754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:18.878137  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:19.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:33:19.377682  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:19.377977  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:19.378029  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:19.877848  401365 type.go:168] "Request Body" body=""
	I1210 06:33:19.877930  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:19.878248  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:20.378064  401365 type.go:168] "Request Body" body=""
	I1210 06:33:20.378150  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:20.378561  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:20.878476  401365 type.go:168] "Request Body" body=""
	I1210 06:33:20.878552  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:20.878835  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:21.377583  401365 type.go:168] "Request Body" body=""
	I1210 06:33:21.377658  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:21.378029  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:21.378094  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:21.877680  401365 type.go:168] "Request Body" body=""
	I1210 06:33:21.877755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:21.878122  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:22.378420  401365 type.go:168] "Request Body" body=""
	I1210 06:33:22.378487  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:22.378808  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:22.877547  401365 type.go:168] "Request Body" body=""
	I1210 06:33:22.877625  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:22.877980  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:23.377731  401365 type.go:168] "Request Body" body=""
	I1210 06:33:23.377812  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:23.378164  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:23.378221  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:23.877756  401365 type.go:168] "Request Body" body=""
	I1210 06:33:23.877825  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:23.878080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:24.377759  401365 type.go:168] "Request Body" body=""
	I1210 06:33:24.377846  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:24.378207  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:24.878036  401365 type.go:168] "Request Body" body=""
	I1210 06:33:24.878119  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:24.878474  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:25.378280  401365 type.go:168] "Request Body" body=""
	I1210 06:33:25.378375  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:25.378683  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:25.378744  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:25.878089  401365 type.go:168] "Request Body" body=""
	I1210 06:33:25.878190  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:25.878571  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:26.378247  401365 type.go:168] "Request Body" body=""
	I1210 06:33:26.378325  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:26.378653  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:26.878389  401365 type.go:168] "Request Body" body=""
	I1210 06:33:26.878457  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:26.878720  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:27.378526  401365 type.go:168] "Request Body" body=""
	I1210 06:33:27.378607  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:27.378943  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:27.379002  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:27.877690  401365 type.go:168] "Request Body" body=""
	I1210 06:33:27.877775  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:27.878121  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:28.377561  401365 type.go:168] "Request Body" body=""
	I1210 06:33:28.377635  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:28.377959  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:28.877670  401365 type.go:168] "Request Body" body=""
	I1210 06:33:28.877750  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:28.878089  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:29.378437  401365 type.go:168] "Request Body" body=""
	I1210 06:33:29.378518  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:29.378867  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:29.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:29.877685  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:29.877995  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:29.878058  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:30.377631  401365 type.go:168] "Request Body" body=""
	I1210 06:33:30.377707  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:30.378042  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:30.877750  401365 type.go:168] "Request Body" body=""
	I1210 06:33:30.877827  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:30.878132  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:31.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:33:31.377692  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:31.377951  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:31.877635  401365 type.go:168] "Request Body" body=""
	I1210 06:33:31.877717  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:31.878049  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:31.878116  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:32.377678  401365 type.go:168] "Request Body" body=""
	I1210 06:33:32.377760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:32.378103  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:32.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:32.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:32.877995  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:33.377756  401365 type.go:168] "Request Body" body=""
	I1210 06:33:33.377840  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:33.378198  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:33.877915  401365 type.go:168] "Request Body" body=""
	I1210 06:33:33.877993  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:33.878332  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:33.878392  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:34.377635  401365 type.go:168] "Request Body" body=""
	I1210 06:33:34.377727  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:34.378085  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:34.878096  401365 type.go:168] "Request Body" body=""
	I1210 06:33:34.878177  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:34.878550  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:35.378207  401365 type.go:168] "Request Body" body=""
	I1210 06:33:35.378280  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:35.378622  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:35.878407  401365 type.go:168] "Request Body" body=""
	I1210 06:33:35.878481  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:35.878777  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:35.878833  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:36.378544  401365 type.go:168] "Request Body" body=""
	I1210 06:33:36.378618  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:36.378979  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:36.877667  401365 type.go:168] "Request Body" body=""
	I1210 06:33:36.877742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:36.878064  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:37.377603  401365 type.go:168] "Request Body" body=""
	I1210 06:33:37.377674  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:37.377946  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:37.877690  401365 type.go:168] "Request Body" body=""
	I1210 06:33:37.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:37.878181  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:38.377888  401365 type.go:168] "Request Body" body=""
	I1210 06:33:38.377973  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:38.378298  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:38.378347  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:38.877651  401365 type.go:168] "Request Body" body=""
	I1210 06:33:38.877756  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:38.878041  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:39.377680  401365 type.go:168] "Request Body" body=""
	I1210 06:33:39.377755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:39.378094  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:39.877930  401365 type.go:168] "Request Body" body=""
	I1210 06:33:39.878008  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:39.878344  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:40.378300  401365 type.go:168] "Request Body" body=""
	I1210 06:33:40.378366  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:40.378615  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:40.378657  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:40.878469  401365 type.go:168] "Request Body" body=""
	I1210 06:33:40.878546  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:40.878897  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:41.378609  401365 type.go:168] "Request Body" body=""
	I1210 06:33:41.378684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:41.379020  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:41.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:41.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:41.878041  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:42.377673  401365 type.go:168] "Request Body" body=""
	I1210 06:33:42.377747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:42.378116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:42.877854  401365 type.go:168] "Request Body" body=""
	I1210 06:33:42.877940  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:42.878285  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:42.878351  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:43.377678  401365 type.go:168] "Request Body" body=""
	I1210 06:33:43.377746  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.378043  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:43.877664  401365 type.go:168] "Request Body" body=""
	I1210 06:33:43.877741  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.878068  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:44.377646  401365 type.go:168] "Request Body" body=""
	I1210 06:33:44.377729  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.378109  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:44.877931  401365 type.go:168] "Request Body" body=""
	I1210 06:33:44.878000  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.878273  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:45.377683  401365 type.go:168] "Request Body" body=""
	I1210 06:33:45.377768  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.378162  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:45.378230  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:45.877646  401365 type.go:168] "Request Body" body=""
	I1210 06:33:45.877726  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.878079  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.378365  401365 type.go:168] "Request Body" body=""
	I1210 06:33:46.378443  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.378778  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.878592  401365 type.go:168] "Request Body" body=""
	I1210 06:33:46.878667  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.879016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.377612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:47.377693  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.378037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.878404  401365 type.go:168] "Request Body" body=""
	I1210 06:33:47.878481  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.878791  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:47.878833  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:48.378603  401365 type.go:168] "Request Body" body=""
	I1210 06:33:48.378679  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.379038  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:48.877634  401365 type.go:168] "Request Body" body=""
	I1210 06:33:48.877710  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.878058  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.377585  401365 type.go:168] "Request Body" body=""
	I1210 06:33:49.377661  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.377929  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.877952  401365 type.go:168] "Request Body" body=""
	I1210 06:33:49.878030  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.878370  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:50.378433  401365 type.go:168] "Request Body" body=""
	I1210 06:33:50.378512  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.378851  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:50.378908  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:50.878409  401365 type.go:168] "Request Body" body=""
	I1210 06:33:50.878484  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.878745  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.378528  401365 type.go:168] "Request Body" body=""
	I1210 06:33:51.378608  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.378930  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.877680  401365 type.go:168] "Request Body" body=""
	I1210 06:33:51.877772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.878121  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:33:52.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.378042  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.877736  401365 type.go:168] "Request Body" body=""
	I1210 06:33:52.877859  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.878200  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:52.878263  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:53.377750  401365 type.go:168] "Request Body" body=""
	I1210 06:33:53.377829  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.378164  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:53.878375  401365 type.go:168] "Request Body" body=""
	I1210 06:33:53.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.878711  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.378552  401365 type.go:168] "Request Body" body=""
	I1210 06:33:54.378627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.378978  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.877937  401365 type.go:168] "Request Body" body=""
	I1210 06:33:54.878016  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.878372  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:54.878426  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:55.377557  401365 type.go:168] "Request Body" body=""
	I1210 06:33:55.377627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.377890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:55.877581  401365 type.go:168] "Request Body" body=""
	I1210 06:33:55.877661  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.878044  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:33:56.377687  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.378031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.878382  401365 type.go:168] "Request Body" body=""
	I1210 06:33:56.878463  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.878747  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:56.878792  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:57.378563  401365 type.go:168] "Request Body" body=""
	I1210 06:33:57.378655  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.379048  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:57.878429  401365 type.go:168] "Request Body" body=""
	I1210 06:33:57.878505  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.878838  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.378383  401365 type.go:168] "Request Body" body=""
	I1210 06:33:58.378457  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.378729  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.878537  401365 type.go:168] "Request Body" body=""
	I1210 06:33:58.878615  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.878961  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:58.879020  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:59.377698  401365 type.go:168] "Request Body" body=""
	I1210 06:33:59.377777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.378091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:59.877943  401365 type.go:168] "Request Body" body=""
	I1210 06:33:59.878015  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.878285  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.388459  401365 type.go:168] "Request Body" body=""
	I1210 06:34:00.388551  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.388936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.877633  401365 type.go:168] "Request Body" body=""
	I1210 06:34:00.877711  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.878072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:34:01.377692  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.377964  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:01.378006  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:01.877703  401365 type.go:168] "Request Body" body=""
	I1210 06:34:01.877777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.878116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.377805  401365 type.go:168] "Request Body" body=""
	I1210 06:34:02.377886  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.378243  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.877793  401365 type.go:168] "Request Body" body=""
	I1210 06:34:02.877861  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.878116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:03.377724  401365 type.go:168] "Request Body" body=""
	I1210 06:34:03.377819  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.378176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:03.378248  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:03.877926  401365 type.go:168] "Request Body" body=""
	I1210 06:34:03.877998  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.878340  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.378166  401365 type.go:168] "Request Body" body=""
	I1210 06:34:04.378243  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.378539  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.878398  401365 type.go:168] "Request Body" body=""
	I1210 06:34:04.878479  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.878809  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:05.378551  401365 type.go:168] "Request Body" body=""
	I1210 06:34:05.378630  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.379127  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:05.379181  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:05.877595  401365 type.go:168] "Request Body" body=""
	I1210 06:34:05.877669  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.877928  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.377667  401365 type.go:168] "Request Body" body=""
	I1210 06:34:06.377742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.378094  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.877685  401365 type.go:168] "Request Body" body=""
	I1210 06:34:06.877764  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.878112  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.378383  401365 type.go:168] "Request Body" body=""
	I1210 06:34:07.378451  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.378722  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.878478  401365 type.go:168] "Request Body" body=""
	I1210 06:34:07.878553  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.878918  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:07.878972  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:08.378592  401365 type.go:168] "Request Body" body=""
	I1210 06:34:08.378675  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.379031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:08.877609  401365 type.go:168] "Request Body" body=""
	I1210 06:34:08.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.877968  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.377659  401365 type.go:168] "Request Body" body=""
	I1210 06:34:09.377734  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.378072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.877922  401365 type.go:168] "Request Body" body=""
	I1210 06:34:09.878005  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.878441  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:10.378514  401365 type.go:168] "Request Body" body=""
	I1210 06:34:10.378590  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.378890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:10.378934  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:10.877619  401365 type.go:168] "Request Body" body=""
	I1210 06:34:10.877709  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.878026  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:11.377616  401365 type.go:168] "Request Body" body=""
	I1210 06:34:11.377679  401365 node_ready.go:38] duration metric: took 6m0.000247895s for node "functional-253997" to be "Ready" ...
	I1210 06:34:11.380832  401365 out.go:203] 
	W1210 06:34:11.383623  401365 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:34:11.383641  401365 out.go:285] * 
	* 
	W1210 06:34:11.385783  401365 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:34:11.388549  401365 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-253997 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m7.371958247s for "functional-253997" cluster.
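The polling loop in the log above shows every GET to https://192.168.49.2:8441/api/v1/nodes/functional-253997 failing with "connection refused" until the 6m0s node-ready timeout expires, so the apiserver inside the node container never came back up after the restart. A minimal manual probe of the same endpoint (a sketch for local triage, not part of the test run; it assumes curl is available on the host and inside the kicbase image):

	# from the host, hit the same address the test polled
	curl -sk --max-time 5 https://192.168.49.2:8441/healthz

	# from inside the node container, bypassing the docker network entirely
	docker exec functional-253997 curl -sk --max-time 5 https://localhost:8441/healthz

A healthy kube-apiserver answers /healthz with "ok"; connection refused in both places would point at the apiserver process itself rather than at the container's port wiring.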
I1210 06:34:12.017567  364265 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-253997
helpers_test.go:244: (dbg) docker inspect functional-253997:

-- stdout --
	[
	    {
	        "Id": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	        "Created": "2025-12-10T06:19:33.832297734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:33.914699563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hosts",
	        "LogPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7-json.log",
	        "Name": "/functional-253997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-253997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-253997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	                "LowerDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-253997",
	                "Source": "/var/lib/docker/volumes/functional-253997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253997",
	                "name.minikube.sigs.k8s.io": "functional-253997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484c42edbd556e17e2c5a836571d3275bc9cbd157b70c100e08a5b73322a62e6",
	            "SandboxKey": "/var/run/docker/netns/484c42edbd55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ba:52:c5:6e:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fc5aadc020d5981cfdab05726de722ae81d653cbc30b66014568069657370fe7",
	                    "EndpointID": "9ecd4882ea5c9fe5581b68ad15a0a2f450e34777aee426fcc158008c81076e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253997",
	                        "256e059cdf1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
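The inspect output above shows the container State is "running" and the apiserver port 8441/tcp published to 127.0.0.1:33162 on the host. A single mapping can be pulled directly with the same Go-template style minikube itself uses later in this log (a sketch; the expected value is taken from the inspect dump above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-253997
	# prints: 33162

So the docker-level plumbing for the apiserver port is intact; the refused connections come from nothing listening behind it.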
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997: exit status 2 (335.646184ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
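Exit status 2 together with "Running" on stdout is consistent with the failure mode above: the host container is up, but minikube's status command encodes unhealthy cluster components in its exit code. For the full per-component breakdown, the same binary can be run without the --format filter, or with JSON output (a sketch, not executed by the harness):

	out/minikube-linux-arm64 status -p functional-253997 --output=json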
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-253997 logs -n 25: (1.07645117s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-013831 ssh sudo cat /usr/share/ca-certificates/364265.pem                                                                            │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                        │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /etc/ssl/certs/3642652.pem                                                                                       │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /usr/share/ca-certificates/3642652.pem                                                                           │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /etc/test/nested/copy/364265/hosts                                                                               │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                        │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ cp             │ functional-013831 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                              │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh -n functional-013831 sudo cat /home/docker/cp-test.txt                                                                    │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ cp             │ functional-013831 cp functional-013831:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1440438441/001/cp-test.txt                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh -n functional-013831 sudo cat /home/docker/cp-test.txt                                                                    │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ cp             │ functional-013831 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                       │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format short --alsologtostderr                                                                                     │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format yaml --alsologtostderr                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh -n functional-013831 sudo cat /tmp/does/not/exist/cp-test.txt                                                             │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh pgrep buildkitd                                                                                                           │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ image          │ functional-013831 image ls --format json --alsologtostderr                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image build -t localhost/my-image:functional-013831 testdata/build --alsologtostderr                                          │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format table --alsologtostderr                                                                                     │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls                                                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ delete         │ -p functional-013831                                                                                                                            │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start          │ -p functional-253997 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ start          │ -p functional-253997 --alsologtostderr -v=8                                                                                                     │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:28 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:28:04
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:28:04.696682  401365 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:28:04.696859  401365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:28:04.696892  401365 out.go:374] Setting ErrFile to fd 2...
	I1210 06:28:04.696914  401365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:28:04.697215  401365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:28:04.697662  401365 out.go:368] Setting JSON to false
	I1210 06:28:04.698567  401365 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11437,"bootTime":1765336648,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:28:04.698673  401365 start.go:143] virtualization:  
	I1210 06:28:04.702443  401365 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:28:04.705481  401365 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:28:04.705615  401365 notify.go:221] Checking for updates...
	I1210 06:28:04.711086  401365 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:28:04.713917  401365 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:04.716867  401365 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:28:04.719925  401365 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:28:04.722835  401365 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:28:04.726336  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:04.726469  401365 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:28:04.754166  401365 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:28:04.754279  401365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:28:04.810645  401365 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:28:04.801435563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:28:04.810756  401365 docker.go:319] overlay module found
	I1210 06:28:04.813864  401365 out.go:179] * Using the docker driver based on existing profile
	I1210 06:28:04.816769  401365 start.go:309] selected driver: docker
	I1210 06:28:04.816791  401365 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:04.816907  401365 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:28:04.817028  401365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:28:04.870143  401365 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:28:04.860525891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:28:04.870593  401365 cni.go:84] Creating CNI manager for ""
	I1210 06:28:04.870644  401365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:28:04.870692  401365 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:04.873854  401365 out.go:179] * Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	I1210 06:28:04.876935  401365 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:28:04.879860  401365 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:28:04.882747  401365 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:28:04.882931  401365 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:28:04.906679  401365 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:28:04.906698  401365 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:28:04.939349  401365 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 06:28:05.106989  401365 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1210 06:28:05.107216  401365 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/config.json ...
	I1210 06:28:05.107505  401365 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:28:05.107566  401365 start.go:360] acquireMachinesLock for functional-253997: {Name:mkd4a204596bf14d7530a6c5c103756527acdf26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.107643  401365 start.go:364] duration metric: took 39.278µs to acquireMachinesLock for "functional-253997"
	I1210 06:28:05.107681  401365 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:28:05.107701  401365 fix.go:54] fixHost starting: 
	I1210 06:28:05.107821  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.108032  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:05.134635  401365 fix.go:112] recreateIfNeeded on functional-253997: state=Running err=<nil>
	W1210 06:28:05.134664  401365 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:28:05.138161  401365 out.go:252] * Updating the running docker "functional-253997" container ...
	I1210 06:28:05.138204  401365 machine.go:94] provisionDockerMachine start ...
	I1210 06:28:05.138290  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.156912  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.157271  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.157282  401365 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:28:05.272681  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.312543  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:28:05.312568  401365 ubuntu.go:182] provisioning hostname "functional-253997"
	I1210 06:28:05.312643  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.337102  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.337416  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.337433  401365 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-253997 && echo "functional-253997" | sudo tee /etc/hostname
	I1210 06:28:05.435781  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.503700  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:28:05.503808  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.525010  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.525371  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.525395  401365 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-253997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-253997/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-253997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:28:05.596990  401365 cache.go:107] acquiring lock: {Name:mk9268471d41d450748d4b0133d1a043378776f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597093  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:28:05.597107  401365 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 135.879µs
	I1210 06:28:05.597123  401365 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597148  401365 cache.go:107] acquiring lock: {Name:mk8250dc655f821bf2674ed77a7683798ede4f4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597196  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:28:05.597205  401365 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 71.098µs
	I1210 06:28:05.597212  401365 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597224  401365 cache.go:107] acquiring lock: {Name:mk1c0334e51772e4f6b3429cc05b91ec19d54b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597256  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:28:05.597264  401365 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 41.773µs
	I1210 06:28:05.597271  401365 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597286  401365 cache.go:107] acquiring lock: {Name:mk41aaba55a3296a937cf380d44dc9920df69546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597313  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:28:05.597325  401365 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 45.342µs
	I1210 06:28:05.597331  401365 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597347  401365 cache.go:107] acquiring lock: {Name:mkc2831727d74e5fadb1c00bd89f0236b385200e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597380  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:28:05.597390  401365 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 49.009µs
	I1210 06:28:05.597395  401365 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:28:05.597404  401365 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597432  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:28:05.597441  401365 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 38.597µs
	I1210 06:28:05.597447  401365 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:28:05.597457  401365 cache.go:107] acquiring lock: {Name:mk7e052900e41419dc78d3311f27db508a8f64b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597487  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:28:05.597494  401365 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.163µs
	I1210 06:28:05.597499  401365 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:28:05.597517  401365 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597571  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:28:05.597584  401365 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.023µs
	I1210 06:28:05.597591  401365 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:28:05.597598  401365 cache.go:87] Successfully saved all images to host disk.
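
Each cache.go triplet above takes a named lock, stats the tarball path, and short-circuits when the file already exists, which is why every image "took" only microseconds. A sketch of that exists-then-skip pattern, assuming a hypothetical saveIfMissing helper:

    package main

    import (
        "fmt"
        "os"
        "sync"
        "time"
    )

    var locks sync.Map // one mutex per destination path, like the named locks in cache.go

    // saveIfMissing skips the save when the cached tarball is already on disk,
    // matching the "exists ... took ... succeeded" triplets in the log.
    func saveIfMissing(image, dest string, save func() error) error {
        mu, _ := locks.LoadOrStore(dest, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        start := time.Now()
        if _, err := os.Stat(dest); err == nil {
            fmt.Printf("cache image %q -> %q took %s (already exists)\n", image, dest, time.Since(start))
            return nil
        }
        return save()
    }

    func main() {
        dest := "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1"
        _ = saveIfMissing("registry.k8s.io/pause:3.10.1", dest, func() error { return nil })
    }
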
	I1210 06:28:05.681682  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:28:05.681708  401365 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 06:28:05.681741  401365 ubuntu.go:190] setting up certificates
	I1210 06:28:05.681752  401365 provision.go:84] configureAuth start
	I1210 06:28:05.681819  401365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:28:05.699808  401365 provision.go:143] copyHostCerts
	I1210 06:28:05.699863  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:28:05.699905  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 06:28:05.699919  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:28:05.699992  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 06:28:05.700081  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:28:05.700104  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 06:28:05.700113  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:28:05.700142  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 06:28:05.700188  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:28:05.700207  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 06:28:05.700218  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:28:05.700242  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 06:28:05.700300  401365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.functional-253997 san=[127.0.0.1 192.168.49.2 functional-253997 localhost minikube]
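
provision.go then generates a server certificate signed by the minikube CA carrying the SANs listed above. A sketch of producing a certificate with those SANs via crypto/x509; for brevity it self-signs rather than signing with ca-key.pem as minikube does:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-253997"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs exactly as reported by provision.go above.
            DNSNames:    []string{"functional-253997", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        // Self-signed here; minikube signs with ca.pem/ca-key.pem instead.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
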
	I1210 06:28:05.936274  401365 provision.go:177] copyRemoteCerts
	I1210 06:28:05.936350  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:28:05.936418  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.954560  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.065031  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 06:28:06.065092  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:28:06.082556  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 06:28:06.082620  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:28:06.101057  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 06:28:06.101135  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:28:06.119676  401365 provision.go:87] duration metric: took 437.892883ms to configureAuth
	I1210 06:28:06.119777  401365 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:28:06.119980  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:06.120085  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.137920  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:06.138235  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:06.138256  401365 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:28:06.452845  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:28:06.452929  401365 machine.go:97] duration metric: took 1.314715304s to provisionDockerMachine
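
The SSH command above drops a one-line sysconfig fragment so CRI-O treats the service CIDR 10.96.0.0/12 as an insecure registry, then restarts crio. A sketch writing the same payload, to a local file here since minikube writes /etc/sysconfig/crio.minikube over SSH:

    package main

    import "os"

    func main() {
        // Same payload the SSH command above tees into /etc/sysconfig/crio.minikube.
        content := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        if err := os.WriteFile("crio.minikube", []byte(content), 0o644); err != nil {
            panic(err)
        }
    }
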
	I1210 06:28:06.452956  401365 start.go:293] postStartSetup for "functional-253997" (driver="docker")
	I1210 06:28:06.452990  401365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:28:06.453063  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:28:06.453144  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.470784  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.577269  401365 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:28:06.580692  401365 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 06:28:06.580715  401365 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 06:28:06.580720  401365 command_runner.go:130] > VERSION_ID="12"
	I1210 06:28:06.580725  401365 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 06:28:06.580730  401365 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 06:28:06.580768  401365 command_runner.go:130] > ID=debian
	I1210 06:28:06.580780  401365 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 06:28:06.580785  401365 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 06:28:06.580791  401365 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 06:28:06.580887  401365 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:28:06.580933  401365 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
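
main.go parses the os-release dump above into a struct; the VERSION_CODENAME warning only means the parser met a key with no matching struct field. A sketch of the key=value parsing (parseOSRelease is hypothetical):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns the `cat /etc/os-release` dump above into a map,
    // trimming the surrounding quotes from each value.
    func parseOSRelease(s string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(s))
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                out[k] = strings.Trim(v, `"`)
            }
        }
        return out
    }

    func main() {
        m := parseOSRelease("PRETTY_NAME=\"Debian GNU/Linux 12 (bookworm)\"\nID=debian\n")
        fmt.Println(m["PRETTY_NAME"])
    }
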
	I1210 06:28:06.580952  401365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 06:28:06.581012  401365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 06:28:06.581098  401365 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 06:28:06.581111  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> /etc/ssl/certs/3642652.pem
	I1210 06:28:06.581203  401365 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> hosts in /etc/test/nested/copy/364265
	I1210 06:28:06.581211  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> /etc/test/nested/copy/364265/hosts
	I1210 06:28:06.581307  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364265
	I1210 06:28:06.588834  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:28:06.607350  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts --> /etc/test/nested/copy/364265/hosts (40 bytes)
	I1210 06:28:06.625111  401365 start.go:296] duration metric: took 172.118023ms for postStartSetup
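
filesync.go scans .minikube/files and maps every file to the same relative path inside the node, e.g. .../files/etc/ssl/certs/3642652.pem becomes /etc/ssl/certs/3642652.pem. A sketch of that walk (scanAssets is hypothetical):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    // scanAssets mirrors filesync.go: every file under root is copied to the
    // same relative path inside the node, so the target is just "/" + rel.
    func scanAssets(root string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, err := filepath.Rel(root, path)
            if err != nil {
                return err
            }
            assets[path] = "/" + filepath.ToSlash(rel)
            return nil
        })
        return assets, err
    }

    func main() {
        m, err := scanAssets("/home/jenkins/minikube-integration/22094-362392/.minikube/files")
        fmt.Println(m, err)
    }
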
	I1210 06:28:06.625251  401365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:28:06.625310  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.643314  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.746089  401365 command_runner.go:130] > 11%
	I1210 06:28:06.746641  401365 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:28:06.751190  401365 command_runner.go:130] > 174G
	I1210 06:28:06.751596  401365 fix.go:56] duration metric: took 1.643890859s for fixHost
	I1210 06:28:06.751620  401365 start.go:83] releasing machines lock for "functional-253997", held for 1.643948944s
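
fixHost closes with two df probes: /var is 11% used with 174G free. A sketch of the same probe, using the identical shell pipeline wrapped in Go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // freeGB runs the same probe as the log and returns the 4th column of the
    // second df line, e.g. "174G".
    func freeGB(path string) (string, error) {
        out, err := exec.Command("sh", "-c", "df -BG "+path+" | awk 'NR==2{print $4}'").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        fmt.Println(freeGB("/var"))
    }
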
	I1210 06:28:06.751695  401365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:28:06.769599  401365 ssh_runner.go:195] Run: cat /version.json
	I1210 06:28:06.769653  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.769923  401365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:28:06.769973  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.794205  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.801527  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.995023  401365 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 06:28:06.995129  401365 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 06:28:06.995269  401365 ssh_runner.go:195] Run: systemctl --version
	I1210 06:28:07.001581  401365 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 06:28:07.001629  401365 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 06:28:07.002099  401365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:28:07.048284  401365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 06:28:07.052994  401365 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 06:28:07.053661  401365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:28:07.053769  401365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:28:07.062754  401365 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:28:07.062818  401365 start.go:496] detecting cgroup driver to use...
	I1210 06:28:07.062869  401365 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:28:07.062946  401365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:28:07.079107  401365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:28:07.094803  401365 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:28:07.094958  401365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:28:07.114470  401365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:28:07.128193  401365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:28:07.258424  401365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:28:07.374265  401365 docker.go:234] disabling docker service ...
	I1210 06:28:07.374339  401365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:28:07.389285  401365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:28:07.403201  401365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:28:07.521904  401365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:28:07.641023  401365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
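
docker.go retires the competing runtimes with a stop/disable/mask sequence and tolerates failures, since a unit may simply be absent from the image. A sketch of that tolerant sequence (disableUnit is hypothetical and generalizes slightly: the log disables the socket but masks the service):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // disableUnit runs a stop/disable/mask sequence like docker.go does,
    // ignoring errors because the unit may not exist in the image.
    func disableUnit(name string) {
        for _, args := range [][]string{
            {"systemctl", "stop", "-f", name},
            {"systemctl", "disable", name},
            {"systemctl", "mask", name},
        } {
            if err := exec.Command("sudo", args...).Run(); err != nil {
                fmt.Println("ignoring:", err)
            }
        }
    }

    func main() {
        disableUnit("docker.socket")
        disableUnit("docker.service")
    }
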
	I1210 06:28:07.653771  401365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:28:07.666535  401365 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1210 06:28:07.667719  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:07.817082  401365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:28:07.817158  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.826426  401365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:28:07.826509  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.835611  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.844530  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.853511  401365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:28:07.861378  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.870726  401365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.879012  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.888039  401365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:28:07.894740  401365 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 06:28:07.895767  401365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:28:07.903878  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:08.028500  401365 ssh_runner.go:195] Run: sudo systemctl restart crio
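
The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force cgroup_manager = "cgroupfs", re-add conmon_cgroup beneath it, and open unprivileged ports, before the systemd reload and crio restart. A sketch of one such line-oriented rewrite (setTOMLKey is hypothetical):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setTOMLKey replaces an existing `key = ...` line, the same edit the
    // sed commands above perform on 02-crio.conf.
    func setTOMLKey(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, key+` = "`+value+`"`)
    }

    func main() {
        conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
        conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(conf)
    }
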
	I1210 06:28:08.203883  401365 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:28:08.204004  401365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:28:08.207826  401365 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1210 06:28:08.207850  401365 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 06:28:08.207858  401365 command_runner.go:130] > Device: 0,72	Inode: 1753        Links: 1
	I1210 06:28:08.207864  401365 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:28:08.207869  401365 command_runner.go:130] > Access: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207875  401365 command_runner.go:130] > Modify: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207879  401365 command_runner.go:130] > Change: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207883  401365 command_runner.go:130] >  Birth: -
	I1210 06:28:08.207920  401365 start.go:564] Will wait 60s for crictl version
	I1210 06:28:08.207972  401365 ssh_runner.go:195] Run: which crictl
	I1210 06:28:08.211603  401365 command_runner.go:130] > /usr/local/bin/crictl
	I1210 06:28:08.211673  401365 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:28:08.233344  401365 command_runner.go:130] > Version:  0.1.0
	I1210 06:28:08.233366  401365 command_runner.go:130] > RuntimeName:  cri-o
	I1210 06:28:08.233371  401365 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1210 06:28:08.233486  401365 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 06:28:08.235784  401365 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:28:08.235868  401365 ssh_runner.go:195] Run: crio --version
	I1210 06:28:08.263554  401365 command_runner.go:130] > crio version 1.34.3
	I1210 06:28:08.263582  401365 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 06:28:08.263590  401365 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 06:28:08.263598  401365 command_runner.go:130] >    GitTreeState:   dirty
	I1210 06:28:08.263603  401365 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 06:28:08.263609  401365 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 06:28:08.263614  401365 command_runner.go:130] >    Compiler:       gc
	I1210 06:28:08.263618  401365 command_runner.go:130] >    Platform:       linux/arm64
	I1210 06:28:08.263625  401365 command_runner.go:130] >    Linkmode:       static
	I1210 06:28:08.263631  401365 command_runner.go:130] >    BuildTags:
	I1210 06:28:08.263635  401365 command_runner.go:130] >      static
	I1210 06:28:08.263641  401365 command_runner.go:130] >      netgo
	I1210 06:28:08.263644  401365 command_runner.go:130] >      osusergo
	I1210 06:28:08.263649  401365 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 06:28:08.263658  401365 command_runner.go:130] >      seccomp
	I1210 06:28:08.263662  401365 command_runner.go:130] >      apparmor
	I1210 06:28:08.263665  401365 command_runner.go:130] >      selinux
	I1210 06:28:08.263673  401365 command_runner.go:130] >    LDFlags:          unknown
	I1210 06:28:08.263678  401365 command_runner.go:130] >    SeccompEnabled:   true
	I1210 06:28:08.263686  401365 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 06:28:08.265277  401365 ssh_runner.go:195] Run: crio --version
	I1210 06:28:08.292854  401365 command_runner.go:130] > crio version 1.34.3
	I1210 06:28:08.292877  401365 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 06:28:08.292884  401365 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 06:28:08.292894  401365 command_runner.go:130] >    GitTreeState:   dirty
	I1210 06:28:08.292899  401365 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 06:28:08.292903  401365 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 06:28:08.292909  401365 command_runner.go:130] >    Compiler:       gc
	I1210 06:28:08.292914  401365 command_runner.go:130] >    Platform:       linux/arm64
	I1210 06:28:08.292918  401365 command_runner.go:130] >    Linkmode:       static
	I1210 06:28:08.292921  401365 command_runner.go:130] >    BuildTags:
	I1210 06:28:08.292925  401365 command_runner.go:130] >      static
	I1210 06:28:08.292929  401365 command_runner.go:130] >      netgo
	I1210 06:28:08.292932  401365 command_runner.go:130] >      osusergo
	I1210 06:28:08.292936  401365 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 06:28:08.292939  401365 command_runner.go:130] >      seccomp
	I1210 06:28:08.292943  401365 command_runner.go:130] >      apparmor
	I1210 06:28:08.292947  401365 command_runner.go:130] >      selinux
	I1210 06:28:08.292951  401365 command_runner.go:130] >    LDFlags:          unknown
	I1210 06:28:08.292955  401365 command_runner.go:130] >    SeccompEnabled:   true
	I1210 06:28:08.292959  401365 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 06:28:08.297960  401365 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:28:08.300955  401365 cli_runner.go:164] Run: docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:28:08.316701  401365 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:28:08.320890  401365 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 06:28:08.321107  401365 kubeadm.go:884] updating cluster {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:28:08.321383  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.467539  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.630219  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.778675  401365 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:28:08.778770  401365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:28:08.809702  401365 command_runner.go:130] > {
	I1210 06:28:08.809721  401365 command_runner.go:130] >   "images":  [
	I1210 06:28:08.809725  401365 command_runner.go:130] >     {
	I1210 06:28:08.809734  401365 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1210 06:28:08.809739  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809744  401365 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:28:08.809748  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809753  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809762  401365 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1210 06:28:08.809765  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809770  401365 command_runner.go:130] >       "size":  "29035622",
	I1210 06:28:08.809784  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809789  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809792  401365 command_runner.go:130] >     },
	I1210 06:28:08.809795  401365 command_runner.go:130] >     {
	I1210 06:28:08.809802  401365 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:28:08.809806  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809812  401365 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:28:08.809815  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809819  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809827  401365 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1210 06:28:08.809830  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809834  401365 command_runner.go:130] >       "size":  "74488375",
	I1210 06:28:08.809839  401365 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:28:08.809843  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809846  401365 command_runner.go:130] >     },
	I1210 06:28:08.809850  401365 command_runner.go:130] >     {
	I1210 06:28:08.809856  401365 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1210 06:28:08.809860  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809865  401365 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1210 06:28:08.809868  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809872  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809882  401365 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c711b5adb8fed0c7e2a62a6c327fd0f8486f90b93c1ebd0ba1e790b930373aae"
	I1210 06:28:08.809885  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809889  401365 command_runner.go:130] >       "size":  "60849030",
	I1210 06:28:08.809893  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.809897  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.809900  401365 command_runner.go:130] >       },
	I1210 06:28:08.809904  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809908  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809911  401365 command_runner.go:130] >     },
	I1210 06:28:08.809915  401365 command_runner.go:130] >     {
	I1210 06:28:08.809921  401365 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1210 06:28:08.809925  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809934  401365 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1210 06:28:08.809938  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809941  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809949  401365 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:112cff968330968b8d8cc75dd9f232b5b048067a2151629e56b80ff7af621b72"
	I1210 06:28:08.809954  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809958  401365 command_runner.go:130] >       "size":  "85012778",
	I1210 06:28:08.809961  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.809965  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.809968  401365 command_runner.go:130] >       },
	I1210 06:28:08.809973  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809977  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809980  401365 command_runner.go:130] >     },
	I1210 06:28:08.809983  401365 command_runner.go:130] >     {
	I1210 06:28:08.809989  401365 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1210 06:28:08.809994  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809999  401365 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1210 06:28:08.810002  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810006  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810014  401365 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:89a8e28214a1c4b4631281e37feaf54299e7510d43c5445d4adc88575390f71e"
	I1210 06:28:08.810017  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810021  401365 command_runner.go:130] >       "size":  "72167568",
	I1210 06:28:08.810030  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810035  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.810038  401365 command_runner.go:130] >       },
	I1210 06:28:08.810042  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810046  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810049  401365 command_runner.go:130] >     },
	I1210 06:28:08.810052  401365 command_runner.go:130] >     {
	I1210 06:28:08.810058  401365 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1210 06:28:08.810062  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810068  401365 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1210 06:28:08.810072  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810076  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810086  401365 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:01f8409e5c04cbb256f29dc118ff184a24b9cbb97c7f6d3bd462333b366566ca"
	I1210 06:28:08.810089  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810093  401365 command_runner.go:130] >       "size":  "74105636",
	I1210 06:28:08.810097  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810101  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810104  401365 command_runner.go:130] >     },
	I1210 06:28:08.810107  401365 command_runner.go:130] >     {
	I1210 06:28:08.810114  401365 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1210 06:28:08.810117  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810127  401365 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1210 06:28:08.810131  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810134  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810144  401365 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9a2a4c91f5fdaf1a3c5e839f425cc7461b9e24d5f0816c207638df25ade3eda9"
	I1210 06:28:08.810147  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810151  401365 command_runner.go:130] >       "size":  "49819792",
	I1210 06:28:08.810154  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810158  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.810160  401365 command_runner.go:130] >       },
	I1210 06:28:08.810165  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810169  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810172  401365 command_runner.go:130] >     },
	I1210 06:28:08.810175  401365 command_runner.go:130] >     {
	I1210 06:28:08.810181  401365 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:28:08.810185  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810189  401365 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:28:08.810192  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810196  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810203  401365 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1210 06:28:08.810206  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810210  401365 command_runner.go:130] >       "size":  "517328",
	I1210 06:28:08.810213  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810217  401365 command_runner.go:130] >         "value":  "65535"
	I1210 06:28:08.810220  401365 command_runner.go:130] >       },
	I1210 06:28:08.810228  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810232  401365 command_runner.go:130] >       "pinned":  true
	I1210 06:28:08.810234  401365 command_runner.go:130] >     }
	I1210 06:28:08.810237  401365 command_runner.go:130] >   ]
	I1210 06:28:08.810240  401365 command_runner.go:130] > }
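
The JSON above is the raw output of `sudo crictl images --output json`; crio.go only needs the repoTags to conclude that all images are preloaded. A sketch of structs that decode it, with the fields trimmed to the ones shown and a hand-abbreviated sample payload:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // imageList matches the JSON printed above by `crictl images --output json`.
    type imageList struct {
        Images []struct {
            ID          string   `json:"id"`
            RepoTags    []string `json:"repoTags"`
            RepoDigests []string `json:"repoDigests"`
            Size        string   `json:"size"`
            Pinned      bool     `json:"pinned"`
        } `json:"images"`
    }

    func main() {
        raw := []byte(`{"images":[{"id":"id1","repoTags":["registry.k8s.io/pause:3.10.1"],"size":"517328","pinned":true}]}`)
        var list imageList
        if err := json.Unmarshal(raw, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            fmt.Println(img.RepoTags)
        }
    }
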
	I1210 06:28:08.812152  401365 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:28:08.812177  401365 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:28:08.812185  401365 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1210 06:28:08.812284  401365 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-253997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
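
kubeadm.go renders the systemd drop-in shown above; the empty ExecStart= clears the packaged default before the full kubelet invocation is set. A sketch of rendering it with text/template, with the flag list shortened and the values taken from the log:

    package main

    import (
        "os"
        "text/template"
    )

    // Abbreviated form of the kubelet drop-in printed by kubeadm.go above.
    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        t.Execute(os.Stdout, map[string]string{
            "Version": "v1.35.0-rc.1", "Node": "functional-253997", "IP": "192.168.49.2",
        })
    }
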
	I1210 06:28:08.812367  401365 ssh_runner.go:195] Run: crio config
	I1210 06:28:08.860605  401365 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1210 06:28:08.860628  401365 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1210 06:28:08.860635  401365 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1210 06:28:08.860638  401365 command_runner.go:130] > #
	I1210 06:28:08.860654  401365 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1210 06:28:08.860661  401365 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1210 06:28:08.860668  401365 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1210 06:28:08.860677  401365 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1210 06:28:08.860680  401365 command_runner.go:130] > # reload'.
	I1210 06:28:08.860687  401365 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1210 06:28:08.860694  401365 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1210 06:28:08.860700  401365 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1210 06:28:08.860706  401365 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1210 06:28:08.860709  401365 command_runner.go:130] > [crio]
	I1210 06:28:08.860716  401365 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1210 06:28:08.860721  401365 command_runner.go:130] > # containers images, in this directory.
	I1210 06:28:08.860730  401365 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1210 06:28:08.860737  401365 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1210 06:28:08.860742  401365 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1210 06:28:08.860760  401365 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1210 06:28:08.860811  401365 command_runner.go:130] > # imagestore = ""
	I1210 06:28:08.860819  401365 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1210 06:28:08.860826  401365 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1210 06:28:08.860837  401365 command_runner.go:130] > # storage_driver = "overlay"
	I1210 06:28:08.860843  401365 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1210 06:28:08.860850  401365 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1210 06:28:08.860853  401365 command_runner.go:130] > # storage_option = [
	I1210 06:28:08.860857  401365 command_runner.go:130] > # ]
	I1210 06:28:08.860864  401365 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1210 06:28:08.860870  401365 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1210 06:28:08.860874  401365 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1210 06:28:08.860880  401365 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1210 06:28:08.860886  401365 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1210 06:28:08.860890  401365 command_runner.go:130] > # always happen on a node reboot
	I1210 06:28:08.860894  401365 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1210 06:28:08.860905  401365 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1210 06:28:08.860911  401365 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1210 06:28:08.860918  401365 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1210 06:28:08.860922  401365 command_runner.go:130] > # version_file_persist = ""
	I1210 06:28:08.860930  401365 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1210 06:28:08.860938  401365 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1210 06:28:08.860941  401365 command_runner.go:130] > # internal_wipe = true
	I1210 06:28:08.860950  401365 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1210 06:28:08.860955  401365 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1210 06:28:08.860959  401365 command_runner.go:130] > # internal_repair = true
	I1210 06:28:08.860964  401365 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1210 06:28:08.860971  401365 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1210 06:28:08.860976  401365 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1210 06:28:08.860981  401365 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1210 06:28:08.860987  401365 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1210 06:28:08.860991  401365 command_runner.go:130] > [crio.api]
	I1210 06:28:08.860997  401365 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1210 06:28:08.861001  401365 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1210 06:28:08.861006  401365 command_runner.go:130] > # IP address on which the stream server will listen.
	I1210 06:28:08.861010  401365 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1210 06:28:08.861017  401365 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1210 06:28:08.861026  401365 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1210 06:28:08.861030  401365 command_runner.go:130] > # stream_port = "0"
	I1210 06:28:08.861035  401365 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1210 06:28:08.861040  401365 command_runner.go:130] > # stream_enable_tls = false
	I1210 06:28:08.861046  401365 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1210 06:28:08.861050  401365 command_runner.go:130] > # stream_idle_timeout = ""
	I1210 06:28:08.861056  401365 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1210 06:28:08.861062  401365 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1210 06:28:08.861066  401365 command_runner.go:130] > # stream_tls_cert = ""
	I1210 06:28:08.861072  401365 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1210 06:28:08.861077  401365 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1210 06:28:08.861081  401365 command_runner.go:130] > # stream_tls_key = ""
	I1210 06:28:08.861087  401365 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1210 06:28:08.861093  401365 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1210 06:28:08.861097  401365 command_runner.go:130] > # automatically pick up the changes.
	I1210 06:28:08.861446  401365 command_runner.go:130] > # stream_tls_ca = ""
	I1210 06:28:08.861478  401365 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 06:28:08.861569  401365 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1210 06:28:08.861581  401365 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 06:28:08.861586  401365 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1210 06:28:08.861593  401365 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1210 06:28:08.861599  401365 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1210 06:28:08.861602  401365 command_runner.go:130] > [crio.runtime]
	I1210 06:28:08.861609  401365 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1210 06:28:08.861614  401365 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1210 06:28:08.861628  401365 command_runner.go:130] > # "nofile=1024:2048"
	I1210 06:28:08.861634  401365 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1210 06:28:08.861638  401365 command_runner.go:130] > # default_ulimits = [
	I1210 06:28:08.861653  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861660  401365 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1210 06:28:08.861663  401365 command_runner.go:130] > # no_pivot = false
	I1210 06:28:08.861669  401365 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1210 06:28:08.861675  401365 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1210 06:28:08.861681  401365 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1210 06:28:08.861687  401365 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1210 06:28:08.861696  401365 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1210 06:28:08.861703  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 06:28:08.861707  401365 command_runner.go:130] > # conmon = ""
	I1210 06:28:08.861711  401365 command_runner.go:130] > # Cgroup setting for conmon
	I1210 06:28:08.861718  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1210 06:28:08.861722  401365 command_runner.go:130] > conmon_cgroup = "pod"
	I1210 06:28:08.861728  401365 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1210 06:28:08.861733  401365 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1210 06:28:08.861740  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 06:28:08.861744  401365 command_runner.go:130] > # conmon_env = [
	I1210 06:28:08.861747  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861753  401365 command_runner.go:130] > # Additional environment variables to set for all the
	I1210 06:28:08.861758  401365 command_runner.go:130] > # containers. These are overridden if set in the
	I1210 06:28:08.861764  401365 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1210 06:28:08.861768  401365 command_runner.go:130] > # default_env = [
	I1210 06:28:08.861771  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861787  401365 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1210 06:28:08.861795  401365 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1210 06:28:08.861799  401365 command_runner.go:130] > # selinux = false
	I1210 06:28:08.861809  401365 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1210 06:28:08.861817  401365 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1210 06:28:08.861823  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862101  401365 command_runner.go:130] > # seccomp_profile = ""
	I1210 06:28:08.862113  401365 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1210 06:28:08.862119  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862201  401365 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1210 06:28:08.862211  401365 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1210 06:28:08.862225  401365 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1210 06:28:08.862232  401365 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1210 06:28:08.862239  401365 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1210 06:28:08.862244  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862248  401365 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1210 06:28:08.862254  401365 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1210 06:28:08.862259  401365 command_runner.go:130] > # the cgroup blockio controller.
	I1210 06:28:08.862263  401365 command_runner.go:130] > # blockio_config_file = ""
	I1210 06:28:08.862273  401365 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1210 06:28:08.862283  401365 command_runner.go:130] > # blockio parameters.
	I1210 06:28:08.862294  401365 command_runner.go:130] > # blockio_reload = false
	I1210 06:28:08.862301  401365 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1210 06:28:08.862304  401365 command_runner.go:130] > # irqbalance daemon.
	I1210 06:28:08.862310  401365 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1210 06:28:08.862316  401365 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I1210 06:28:08.862323  401365 command_runner.go:130] > # restore as the irqbalance config at startup. Set to an empty string to disable this flow entirely.
	I1210 06:28:08.862330  401365 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1210 06:28:08.862336  401365 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1210 06:28:08.862342  401365 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1210 06:28:08.862347  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862351  401365 command_runner.go:130] > # rdt_config_file = ""
	I1210 06:28:08.862356  401365 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1210 06:28:08.862384  401365 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1210 06:28:08.862391  401365 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1210 06:28:08.862666  401365 command_runner.go:130] > # separate_pull_cgroup = ""
	I1210 06:28:08.862678  401365 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1210 06:28:08.862685  401365 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1210 06:28:08.862689  401365 command_runner.go:130] > # will be added.
	I1210 06:28:08.862693  401365 command_runner.go:130] > # default_capabilities = [
	I1210 06:28:08.862777  401365 command_runner.go:130] > # 	"CHOWN",
	I1210 06:28:08.862786  401365 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1210 06:28:08.862797  401365 command_runner.go:130] > # 	"FSETID",
	I1210 06:28:08.862802  401365 command_runner.go:130] > # 	"FOWNER",
	I1210 06:28:08.862806  401365 command_runner.go:130] > # 	"SETGID",
	I1210 06:28:08.862809  401365 command_runner.go:130] > # 	"SETUID",
	I1210 06:28:08.862838  401365 command_runner.go:130] > # 	"SETPCAP",
	I1210 06:28:08.862844  401365 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1210 06:28:08.862847  401365 command_runner.go:130] > # 	"KILL",
	I1210 06:28:08.862850  401365 command_runner.go:130] > # ]
	I1210 06:28:08.862858  401365 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1210 06:28:08.862865  401365 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1210 06:28:08.863095  401365 command_runner.go:130] > # add_inheritable_capabilities = false
	I1210 06:28:08.863106  401365 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1210 06:28:08.863112  401365 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 06:28:08.863116  401365 command_runner.go:130] > default_sysctls = [
	I1210 06:28:08.863203  401365 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1210 06:28:08.863243  401365 command_runner.go:130] > ]
	I1210 06:28:08.863252  401365 command_runner.go:130] > # List of devices on the host that a
	I1210 06:28:08.863259  401365 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1210 06:28:08.863263  401365 command_runner.go:130] > # allowed_devices = [
	I1210 06:28:08.863314  401365 command_runner.go:130] > # 	"/dev/fuse",
	I1210 06:28:08.863326  401365 command_runner.go:130] > # 	"/dev/net/tun",
	I1210 06:28:08.863333  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863338  401365 command_runner.go:130] > # List of additional devices, specified as
	I1210 06:28:08.863345  401365 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1210 06:28:08.863351  401365 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1210 06:28:08.863357  401365 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 06:28:08.863361  401365 command_runner.go:130] > # additional_devices = [
	I1210 06:28:08.863363  401365 command_runner.go:130] > # ]
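As an aside, a hypothetical crio.conf entry using the <device-on-host>:<device-on-container>:<permissions> format described above might look like the sketch below; the device paths are taken from the example in the comment, not from this cluster's config:

    additional_devices = [
    	"/dev/sdc:/dev/xvdc:rwm",
    ]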
	I1210 06:28:08.863368  401365 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1210 06:28:08.863372  401365 command_runner.go:130] > # cdi_spec_dirs = [
	I1210 06:28:08.863376  401365 command_runner.go:130] > # 	"/etc/cdi",
	I1210 06:28:08.863379  401365 command_runner.go:130] > # 	"/var/run/cdi",
	I1210 06:28:08.863382  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863388  401365 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1210 06:28:08.863394  401365 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1210 06:28:08.863398  401365 command_runner.go:130] > # Defaults to false.
	I1210 06:28:08.863403  401365 command_runner.go:130] > # device_ownership_from_security_context = false
	I1210 06:28:08.863410  401365 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1210 06:28:08.863415  401365 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1210 06:28:08.863419  401365 command_runner.go:130] > # hooks_dir = [
	I1210 06:28:08.863604  401365 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1210 06:28:08.863612  401365 command_runner.go:130] > # ]
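For reference, hook definitions dropped into these directories follow the oci-hooks(5) JSON format; a minimal sketch, with a hypothetical hook path:

    {
      "version": "1.0.0",
      "hook": { "path": "/usr/local/bin/my-prestart-hook" },
      "when": { "always": true },
      "stages": ["prestart"]
    }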
	I1210 06:28:08.863618  401365 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1210 06:28:08.863625  401365 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1210 06:28:08.863630  401365 command_runner.go:130] > # its default mounts from the following two files:
	I1210 06:28:08.863633  401365 command_runner.go:130] > #
	I1210 06:28:08.863640  401365 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1210 06:28:08.863646  401365 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1210 06:28:08.863652  401365 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1210 06:28:08.863655  401365 command_runner.go:130] > #
	I1210 06:28:08.863661  401365 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1210 06:28:08.863676  401365 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1210 06:28:08.863683  401365 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1210 06:28:08.863687  401365 command_runner.go:130] > #      only add mounts it finds in this file.
	I1210 06:28:08.863690  401365 command_runner.go:130] > #
	I1210 06:28:08.863719  401365 command_runner.go:130] > # default_mounts_file = ""
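A mounts file in the /SRC:/DST format described above can be as simple as the following sketch (both paths are assumptions for illustration):

    /etc/custom-certs:/etc/pki/ca-trust
    /var/lib/extra-data:/data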
	I1210 06:28:08.863725  401365 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1210 06:28:08.863732  401365 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1210 06:28:08.863736  401365 command_runner.go:130] > # pids_limit = -1
	I1210 06:28:08.863742  401365 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1210 06:28:08.863748  401365 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1210 06:28:08.863761  401365 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1210 06:28:08.863771  401365 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1210 06:28:08.863775  401365 command_runner.go:130] > # log_size_max = -1
	I1210 06:28:08.863782  401365 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1210 06:28:08.863786  401365 command_runner.go:130] > # log_to_journald = false
	I1210 06:28:08.863792  401365 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1210 06:28:08.863974  401365 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1210 06:28:08.863984  401365 command_runner.go:130] > # Path to directory for container attach sockets.
	I1210 06:28:08.863990  401365 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1210 06:28:08.863996  401365 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1210 06:28:08.864082  401365 command_runner.go:130] > # bind_mount_prefix = ""
	I1210 06:28:08.864098  401365 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1210 06:28:08.864139  401365 command_runner.go:130] > # read_only = false
	I1210 06:28:08.864149  401365 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1210 06:28:08.864156  401365 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1210 06:28:08.864159  401365 command_runner.go:130] > # live configuration reload.
	I1210 06:28:08.864163  401365 command_runner.go:130] > # log_level = "info"
	I1210 06:28:08.864169  401365 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1210 06:28:08.864174  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.864178  401365 command_runner.go:130] > # log_filter = ""
	I1210 06:28:08.864183  401365 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1210 06:28:08.864190  401365 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1210 06:28:08.864193  401365 command_runner.go:130] > # separated by comma.
	I1210 06:28:08.864208  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864211  401365 command_runner.go:130] > # uid_mappings = ""
	I1210 06:28:08.864218  401365 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1210 06:28:08.864224  401365 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1210 06:28:08.864228  401365 command_runner.go:130] > # separated by comma.
	I1210 06:28:08.864236  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864440  401365 command_runner.go:130] > # gid_mappings = ""
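As a sketch of the containerID:HostID:Size form described above, mapping container IDs 0-65535 onto host IDs starting at 100000 (values are illustrative; multiple ranges would be comma-separated):

    uid_mappings = "0:100000:65536"
    gid_mappings = "0:100000:65536"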
	I1210 06:28:08.864451  401365 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1210 06:28:08.864458  401365 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 06:28:08.864465  401365 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 06:28:08.864473  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864477  401365 command_runner.go:130] > # minimum_mappable_uid = -1
	I1210 06:28:08.864483  401365 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1210 06:28:08.864493  401365 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 06:28:08.864501  401365 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 06:28:08.864514  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864541  401365 command_runner.go:130] > # minimum_mappable_gid = -1
	I1210 06:28:08.864548  401365 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1210 06:28:08.864555  401365 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1210 06:28:08.864560  401365 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1210 06:28:08.864572  401365 command_runner.go:130] > # ctr_stop_timeout = 30
	I1210 06:28:08.864578  401365 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1210 06:28:08.864588  401365 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1210 06:28:08.864593  401365 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1210 06:28:08.864598  401365 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1210 06:28:08.864602  401365 command_runner.go:130] > # drop_infra_ctr = true
	I1210 06:28:08.864608  401365 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1210 06:28:08.864613  401365 command_runner.go:130] > # You can use Linux CPU list format to specify desired CPUs.
	I1210 06:28:08.864621  401365 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1210 06:28:08.864625  401365 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1210 06:28:08.864632  401365 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1210 06:28:08.864638  401365 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1210 06:28:08.864644  401365 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1210 06:28:08.864649  401365 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1210 06:28:08.864653  401365 command_runner.go:130] > # shared_cpuset = ""
	I1210 06:28:08.864659  401365 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1210 06:28:08.864664  401365 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1210 06:28:08.864668  401365 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1210 06:28:08.864675  401365 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1210 06:28:08.864858  401365 command_runner.go:130] > # pinns_path = ""
	I1210 06:28:08.864869  401365 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1210 06:28:08.864876  401365 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1210 06:28:08.864881  401365 command_runner.go:130] > # enable_criu_support = true
	I1210 06:28:08.864886  401365 command_runner.go:130] > # Enable/disable the generation of container and
	I1210 06:28:08.864892  401365 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG.
	I1210 06:28:08.864935  401365 command_runner.go:130] > # enable_pod_events = false
	I1210 06:28:08.864946  401365 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 06:28:08.864960  401365 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1210 06:28:08.865092  401365 command_runner.go:130] > # default_runtime = "crun"
	I1210 06:28:08.865104  401365 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1210 06:28:08.865112  401365 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1210 06:28:08.865122  401365 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1210 06:28:08.865127  401365 command_runner.go:130] > # creation as a file is not desired either.
	I1210 06:28:08.865136  401365 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1210 06:28:08.865141  401365 command_runner.go:130] > # the hostname is being managed dynamically.
	I1210 06:28:08.865146  401365 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1210 06:28:08.865148  401365 command_runner.go:130] > # ]
	I1210 06:28:08.865158  401365 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1210 06:28:08.865165  401365 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1210 06:28:08.865171  401365 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1210 06:28:08.865177  401365 command_runner.go:130] > # Each entry in the table should follow the format:
	I1210 06:28:08.865179  401365 command_runner.go:130] > #
	I1210 06:28:08.865200  401365 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1210 06:28:08.865207  401365 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1210 06:28:08.865210  401365 command_runner.go:130] > # runtime_type = "oci"
	I1210 06:28:08.865215  401365 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1210 06:28:08.865219  401365 command_runner.go:130] > # inherit_default_runtime = false
	I1210 06:28:08.865224  401365 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1210 06:28:08.865229  401365 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1210 06:28:08.865233  401365 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1210 06:28:08.865236  401365 command_runner.go:130] > # monitor_env = []
	I1210 06:28:08.865241  401365 command_runner.go:130] > # privileged_without_host_devices = false
	I1210 06:28:08.865245  401365 command_runner.go:130] > # allowed_annotations = []
	I1210 06:28:08.865250  401365 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1210 06:28:08.865253  401365 command_runner.go:130] > # no_sync_log = false
	I1210 06:28:08.865257  401365 command_runner.go:130] > # default_annotations = {}
	I1210 06:28:08.865261  401365 command_runner.go:130] > # stream_websockets = false
	I1210 06:28:08.865265  401365 command_runner.go:130] > # seccomp_profile = ""
	I1210 06:28:08.865296  401365 command_runner.go:130] > # Where:
	I1210 06:28:08.865301  401365 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1210 06:28:08.865308  401365 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1210 06:28:08.865314  401365 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1210 06:28:08.865320  401365 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1210 06:28:08.865323  401365 command_runner.go:130] > #   in $PATH.
	I1210 06:28:08.865330  401365 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1210 06:28:08.865334  401365 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1210 06:28:08.865341  401365 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1210 06:28:08.865344  401365 command_runner.go:130] > #   state.
	I1210 06:28:08.865352  401365 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1210 06:28:08.865360  401365 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1210 06:28:08.865368  401365 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1210 06:28:08.865376  401365 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1210 06:28:08.865381  401365 command_runner.go:130] > #   the values from the default runtime on load time.
	I1210 06:28:08.865387  401365 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1210 06:28:08.865392  401365 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1210 06:28:08.865399  401365 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1210 06:28:08.865406  401365 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1210 06:28:08.865411  401365 command_runner.go:130] > #   The currently recognized values are:
	I1210 06:28:08.865417  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1210 06:28:08.865425  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1210 06:28:08.865431  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1210 06:28:08.865437  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1210 06:28:08.865444  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1210 06:28:08.865451  401365 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1210 06:28:08.865458  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1210 06:28:08.865464  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1210 06:28:08.865470  401365 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1210 06:28:08.865492  401365 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1210 06:28:08.865501  401365 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1210 06:28:08.865507  401365 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1210 06:28:08.865513  401365 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1210 06:28:08.865519  401365 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1210 06:28:08.865525  401365 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1210 06:28:08.865533  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1210 06:28:08.865539  401365 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1210 06:28:08.865552  401365 command_runner.go:130] > #   deprecated option "conmon".
	I1210 06:28:08.865560  401365 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1210 06:28:08.865565  401365 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1210 06:28:08.865572  401365 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1210 06:28:08.865578  401365 command_runner.go:130] > #   should be moved to the container's cgroup
	I1210 06:28:08.865587  401365 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1210 06:28:08.865592  401365 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1210 06:28:08.865599  401365 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1210 06:28:08.865607  401365 command_runner.go:130] > #   conmon-rs by using:
	I1210 06:28:08.865615  401365 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1210 06:28:08.865622  401365 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1210 06:28:08.865630  401365 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1210 06:28:08.865636  401365 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1210 06:28:08.865642  401365 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1210 06:28:08.865649  401365 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1210 06:28:08.865657  401365 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1210 06:28:08.865661  401365 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1210 06:28:08.865669  401365 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1210 06:28:08.865677  401365 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1210 06:28:08.865685  401365 command_runner.go:130] > #   when a machine crash happens.
	I1210 06:28:08.865693  401365 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1210 06:28:08.865700  401365 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1210 06:28:08.865708  401365 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1210 06:28:08.865713  401365 command_runner.go:130] > #   seccomp profile for the runtime.
	I1210 06:28:08.865719  401365 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1210 06:28:08.865744  401365 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
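Combining the fields documented above, a hypothetical handler entry for a VM-type runtime could look like the sketch below; the handler name and all paths are assumptions, not values from this configuration:

    [crio.runtime.runtimes.kata]
    runtime_path = "/usr/bin/kata-runtime"
    runtime_type = "vm"
    runtime_root = "/run/kata"
    runtime_config_path = "/etc/kata-containers/configuration.toml"
    privileged_without_host_devices = true
    allowed_annotations = [
    	"io.kubernetes.cri-o.Devices",
    ]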
	I1210 06:28:08.865747  401365 command_runner.go:130] > #
	I1210 06:28:08.865751  401365 command_runner.go:130] > # Using the seccomp notifier feature:
	I1210 06:28:08.865754  401365 command_runner.go:130] > #
	I1210 06:28:08.865762  401365 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1210 06:28:08.865768  401365 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1210 06:28:08.865771  401365 command_runner.go:130] > #
	I1210 06:28:08.865777  401365 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1210 06:28:08.865783  401365 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1210 06:28:08.865785  401365 command_runner.go:130] > #
	I1210 06:28:08.865793  401365 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1210 06:28:08.865797  401365 command_runner.go:130] > # feature.
	I1210 06:28:08.865800  401365 command_runner.go:130] > #
	I1210 06:28:08.865807  401365 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1210 06:28:08.865813  401365 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1210 06:28:08.865819  401365 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1210 06:28:08.865832  401365 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1210 06:28:08.865838  401365 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1210 06:28:08.865841  401365 command_runner.go:130] > #
	I1210 06:28:08.865847  401365 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1210 06:28:08.865853  401365 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1210 06:28:08.865856  401365 command_runner.go:130] > #
	I1210 06:28:08.865862  401365 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1210 06:28:08.865870  401365 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1210 06:28:08.865873  401365 command_runner.go:130] > #
	I1210 06:28:08.865880  401365 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1210 06:28:08.865885  401365 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1210 06:28:08.865889  401365 command_runner.go:130] > # limitation.
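A sketch of a pod opting into the notifier as described above, assuming a runtime handler that allows the annotation; the pod name and image are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: notifier-demo
      annotations:
        io.kubernetes.cri-o.seccompNotifierAction: "stop"
    spec:
      restartPolicy: Never
      containers:
      - name: app
        image: registry.example.com/app:latest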
	I1210 06:28:08.865905  401365 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1210 06:28:08.866331  401365 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1210 06:28:08.866426  401365 command_runner.go:130] > runtime_type = ""
	I1210 06:28:08.866446  401365 command_runner.go:130] > runtime_root = "/run/crun"
	I1210 06:28:08.866464  401365 command_runner.go:130] > inherit_default_runtime = false
	I1210 06:28:08.866497  401365 command_runner.go:130] > runtime_config_path = ""
	I1210 06:28:08.866524  401365 command_runner.go:130] > container_min_memory = ""
	I1210 06:28:08.866577  401365 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 06:28:08.866606  401365 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 06:28:08.866632  401365 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 06:28:08.866654  401365 command_runner.go:130] > allowed_annotations = [
	I1210 06:28:08.866675  401365 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1210 06:28:08.866694  401365 command_runner.go:130] > ]
	I1210 06:28:08.866728  401365 command_runner.go:130] > privileged_without_host_devices = false
	I1210 06:28:08.866748  401365 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1210 06:28:08.866769  401365 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1210 06:28:08.866790  401365 command_runner.go:130] > runtime_type = ""
	I1210 06:28:08.866821  401365 command_runner.go:130] > runtime_root = "/run/runc"
	I1210 06:28:08.866840  401365 command_runner.go:130] > inherit_default_runtime = false
	I1210 06:28:08.866860  401365 command_runner.go:130] > runtime_config_path = ""
	I1210 06:28:08.866880  401365 command_runner.go:130] > container_min_memory = ""
	I1210 06:28:08.866908  401365 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 06:28:08.866932  401365 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 06:28:08.866953  401365 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 06:28:08.866974  401365 command_runner.go:130] > privileged_without_host_devices = false
	I1210 06:28:08.867007  401365 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1210 06:28:08.867043  401365 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1210 06:28:08.867068  401365 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1210 06:28:08.867104  401365 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1210 06:28:08.867134  401365 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1210 06:28:08.867162  401365 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1210 06:28:08.867185  401365 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1210 06:28:08.867213  401365 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1210 06:28:08.867246  401365 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1210 06:28:08.867272  401365 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1210 06:28:08.867293  401365 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1210 06:28:08.867324  401365 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1210 06:28:08.867347  401365 command_runner.go:130] > # Example:
	I1210 06:28:08.867368  401365 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1210 06:28:08.867390  401365 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1210 06:28:08.867422  401365 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1210 06:28:08.867444  401365 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1210 06:28:08.867461  401365 command_runner.go:130] > # cpuset = "0-1"
	I1210 06:28:08.867481  401365 command_runner.go:130] > # cpushares = "5"
	I1210 06:28:08.867501  401365 command_runner.go:130] > # cpuquota = "1000"
	I1210 06:28:08.867527  401365 command_runner.go:130] > # cpuperiod = "100000"
	I1210 06:28:08.867550  401365 command_runner.go:130] > # cpulimit = "35"
	I1210 06:28:08.867570  401365 command_runner.go:130] > # Where:
	I1210 06:28:08.867591  401365 command_runner.go:130] > # The workload name is workload-type.
	I1210 06:28:08.867625  401365 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1210 06:28:08.867647  401365 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1210 06:28:08.867667  401365 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1210 06:28:08.867691  401365 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1210 06:28:08.867724  401365 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
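Tying the example together, a pod opting into the workload above and overriding cpushares for a container named app might carry annotations like the following sketch, using the $annotation_prefix.$resource/$ctrName form given earlier (the comment above also shows an alternative JSON-valued form); names and values are illustrative:

    metadata:
      annotations:
        io.crio/workload: ""
        io.crio.workload-type.cpushares/app: "10"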
	I1210 06:28:08.867747  401365 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1210 06:28:08.867767  401365 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1210 06:28:08.867786  401365 command_runner.go:130] > # Default value is set to true
	I1210 06:28:08.867808  401365 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1210 06:28:08.867842  401365 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1210 06:28:08.867862  401365 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1210 06:28:08.867882  401365 command_runner.go:130] > # Default value is set to 'false'
	I1210 06:28:08.867915  401365 command_runner.go:130] > # disable_hostport_mapping = false
	I1210 06:28:08.867942  401365 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1210 06:28:08.867964  401365 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1210 06:28:08.867982  401365 command_runner.go:130] > # timezone = ""
	I1210 06:28:08.868015  401365 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1210 06:28:08.868041  401365 command_runner.go:130] > #
	I1210 06:28:08.868060  401365 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1210 06:28:08.868081  401365 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1210 06:28:08.868110  401365 command_runner.go:130] > [crio.image]
	I1210 06:28:08.868133  401365 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1210 06:28:08.868150  401365 command_runner.go:130] > # default_transport = "docker://"
	I1210 06:28:08.868170  401365 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1210 06:28:08.868192  401365 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1210 06:28:08.868219  401365 command_runner.go:130] > # global_auth_file = ""
	I1210 06:28:08.868243  401365 command_runner.go:130] > # The image used to instantiate infra containers.
	I1210 06:28:08.868264  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.868284  401365 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1210 06:28:08.868317  401365 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1210 06:28:08.868338  401365 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1210 06:28:08.868357  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.868374  401365 command_runner.go:130] > # pause_image_auth_file = ""
	I1210 06:28:08.868396  401365 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1210 06:28:08.868423  401365 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1210 06:28:08.868450  401365 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1210 06:28:08.868474  401365 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1210 06:28:08.868753  401365 command_runner.go:130] > # pause_command = "/pause"
	I1210 06:28:08.868765  401365 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1210 06:28:08.868772  401365 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1210 06:28:08.868778  401365 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1210 06:28:08.868784  401365 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1210 06:28:08.868791  401365 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1210 06:28:08.868797  401365 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1210 06:28:08.868802  401365 command_runner.go:130] > # pinned_images = [
	I1210 06:28:08.868834  401365 command_runner.go:130] > # ]
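An illustrative pinned_images list exercising all three pattern types described above (exact, trailing glob, and keyword); the non-pause entries are assumptions:

    pinned_images = [
    	"registry.k8s.io/pause:3.10.1",
    	"quay.io/myorg/agent*",
    	"*monitoring*",
    ]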
	I1210 06:28:08.868841  401365 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1210 06:28:08.868848  401365 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1210 06:28:08.868855  401365 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1210 06:28:08.868864  401365 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1210 06:28:08.868877  401365 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1210 06:28:08.868892  401365 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1210 06:28:08.868897  401365 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1210 06:28:08.868904  401365 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1210 06:28:08.868911  401365 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1210 06:28:08.868917  401365 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1210 06:28:08.868924  401365 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1210 06:28:08.868928  401365 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
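For context, the most permissive containers-policy.json(5), which accepts any image, looks like the sketch below; stricter policies add per-registry signature requirements:

    {
      "default": [
        { "type": "insecureAcceptAnything" }
      ]
    }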
	I1210 06:28:08.868935  401365 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1210 06:28:08.868941  401365 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1210 06:28:08.868945  401365 command_runner.go:130] > # changing them here.
	I1210 06:28:08.868950  401365 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1210 06:28:08.868954  401365 command_runner.go:130] > # insecure_registries = [
	I1210 06:28:08.868957  401365 command_runner.go:130] > # ]
	I1210 06:28:08.868964  401365 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1210 06:28:08.868968  401365 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1210 06:28:08.868972  401365 command_runner.go:130] > # image_volumes = "mkdir"
	I1210 06:28:08.868978  401365 command_runner.go:130] > # Temporary directory to use for storing big files
	I1210 06:28:08.868982  401365 command_runner.go:130] > # big_files_temporary_dir = ""
	I1210 06:28:08.868988  401365 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1210 06:28:08.868995  401365 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1210 06:28:08.868999  401365 command_runner.go:130] > # auto_reload_registries = false
	I1210 06:28:08.869006  401365 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1210 06:28:08.869014  401365 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1210 06:28:08.869022  401365 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1210 06:28:08.869027  401365 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1210 06:28:08.869031  401365 command_runner.go:130] > # The mode of short name resolution.
	I1210 06:28:08.869039  401365 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1210 06:28:08.869047  401365 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1210 06:28:08.869051  401365 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1210 06:28:08.869055  401365 command_runner.go:130] > # short_name_mode = "enforcing"
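Ambiguity here is determined by registries.conf; a minimal sketch that keeps short-name pulls unambiguous under "enforcing" by configuring a single search registry:

    unqualified-search-registries = ["docker.io"]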
	I1210 06:28:08.869061  401365 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1210 06:28:08.869067  401365 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1210 06:28:08.869299  401365 command_runner.go:130] > # oci_artifact_mount_support = true
	I1210 06:28:08.869316  401365 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1210 06:28:08.869329  401365 command_runner.go:130] > # CNI plugins.
	I1210 06:28:08.869333  401365 command_runner.go:130] > [crio.network]
	I1210 06:28:08.869340  401365 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1210 06:28:08.869346  401365 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1210 06:28:08.869485  401365 command_runner.go:130] > # cni_default_network = ""
	I1210 06:28:08.869502  401365 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1210 06:28:08.869709  401365 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1210 06:28:08.869721  401365 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1210 06:28:08.869725  401365 command_runner.go:130] > # plugin_dirs = [
	I1210 06:28:08.869729  401365 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1210 06:28:08.869732  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869736  401365 command_runner.go:130] > # List of included pod metrics.
	I1210 06:28:08.869740  401365 command_runner.go:130] > # included_pod_metrics = [
	I1210 06:28:08.869743  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869749  401365 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1210 06:28:08.869752  401365 command_runner.go:130] > [crio.metrics]
	I1210 06:28:08.869757  401365 command_runner.go:130] > # Globally enable or disable metrics support.
	I1210 06:28:08.869763  401365 command_runner.go:130] > # enable_metrics = false
	I1210 06:28:08.869767  401365 command_runner.go:130] > # Specify enabled metrics collectors.
	I1210 06:28:08.869772  401365 command_runner.go:130] > # Per default all metrics are enabled.
	I1210 06:28:08.869778  401365 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1210 06:28:08.869785  401365 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1210 06:28:08.869791  401365 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1210 06:28:08.869796  401365 command_runner.go:130] > # metrics_collectors = [
	I1210 06:28:08.869800  401365 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1210 06:28:08.869805  401365 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1210 06:28:08.869809  401365 command_runner.go:130] > # 	"containers_oom_total",
	I1210 06:28:08.869813  401365 command_runner.go:130] > # 	"processes_defunct",
	I1210 06:28:08.869817  401365 command_runner.go:130] > # 	"operations_total",
	I1210 06:28:08.869821  401365 command_runner.go:130] > # 	"operations_latency_seconds",
	I1210 06:28:08.869826  401365 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1210 06:28:08.869830  401365 command_runner.go:130] > # 	"operations_errors_total",
	I1210 06:28:08.869834  401365 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1210 06:28:08.869839  401365 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1210 06:28:08.869843  401365 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1210 06:28:08.869851  401365 command_runner.go:130] > # 	"image_pulls_success_total",
	I1210 06:28:08.869855  401365 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1210 06:28:08.869860  401365 command_runner.go:130] > # 	"containers_oom_count_total",
	I1210 06:28:08.869865  401365 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1210 06:28:08.869873  401365 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1210 06:28:08.869878  401365 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1210 06:28:08.869881  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869887  401365 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1210 06:28:08.869891  401365 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1210 06:28:08.869896  401365 command_runner.go:130] > # The port on which the metrics server will listen.
	I1210 06:28:08.869901  401365 command_runner.go:130] > # metrics_port = 9090
	I1210 06:28:08.869906  401365 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1210 06:28:08.869910  401365 command_runner.go:130] > # metrics_socket = ""
	I1210 06:28:08.869915  401365 command_runner.go:130] > # The certificate for the secure metrics server.
	I1210 06:28:08.869921  401365 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1210 06:28:08.869928  401365 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1210 06:28:08.869934  401365 command_runner.go:130] > # certificate on any modification event.
	I1210 06:28:08.869938  401365 command_runner.go:130] > # metrics_cert = ""
	I1210 06:28:08.869943  401365 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1210 06:28:08.869948  401365 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1210 06:28:08.869963  401365 command_runner.go:130] > # metrics_key = ""
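If metrics were enabled (enable_metrics = true), the endpoint could be probed against the default host and port shown above; a quick sanity check:

    curl -s http://127.0.0.1:9090/metrics | grep crio_operations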
	I1210 06:28:08.869970  401365 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1210 06:28:08.869973  401365 command_runner.go:130] > [crio.tracing]
	I1210 06:28:08.869978  401365 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1210 06:28:08.869982  401365 command_runner.go:130] > # enable_tracing = false
	I1210 06:28:08.869987  401365 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1210 06:28:08.869992  401365 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1210 06:28:08.869999  401365 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1210 06:28:08.870003  401365 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1210 06:28:08.870007  401365 command_runner.go:130] > # CRI-O NRI configuration.
	I1210 06:28:08.870010  401365 command_runner.go:130] > [crio.nri]
	I1210 06:28:08.870014  401365 command_runner.go:130] > # Globally enable or disable NRI.
	I1210 06:28:08.870018  401365 command_runner.go:130] > # enable_nri = true
	I1210 06:28:08.870022  401365 command_runner.go:130] > # NRI socket to listen on.
	I1210 06:28:08.870026  401365 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1210 06:28:08.870031  401365 command_runner.go:130] > # NRI plugin directory to use.
	I1210 06:28:08.870035  401365 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1210 06:28:08.870044  401365 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1210 06:28:08.870049  401365 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1210 06:28:08.870054  401365 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1210 06:28:08.870120  401365 command_runner.go:130] > # nri_disable_connections = false
	I1210 06:28:08.870126  401365 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1210 06:28:08.870131  401365 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1210 06:28:08.870136  401365 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1210 06:28:08.870140  401365 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1210 06:28:08.870144  401365 command_runner.go:130] > # NRI default validator configuration.
	I1210 06:28:08.870151  401365 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1210 06:28:08.870158  401365 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1210 06:28:08.870166  401365 command_runner.go:130] > # can be restricted/rejected:
	I1210 06:28:08.870170  401365 command_runner.go:130] > # - OCI hook injection
	I1210 06:28:08.870176  401365 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1210 06:28:08.870182  401365 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1210 06:28:08.870187  401365 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1210 06:28:08.870192  401365 command_runner.go:130] > # - adjustment of linux namespaces
	I1210 06:28:08.870198  401365 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1210 06:28:08.870204  401365 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1210 06:28:08.870211  401365 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1210 06:28:08.870214  401365 command_runner.go:130] > #
	I1210 06:28:08.870219  401365 command_runner.go:130] > # [crio.nri.default_validator]
	I1210 06:28:08.870224  401365 command_runner.go:130] > # nri_enable_default_validator = false
	I1210 06:28:08.870229  401365 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1210 06:28:08.870235  401365 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1210 06:28:08.870240  401365 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1210 06:28:08.870245  401365 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1210 06:28:08.870249  401365 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1210 06:28:08.870254  401365 command_runner.go:130] > # nri_validator_required_plugins = [
	I1210 06:28:08.870256  401365 command_runner.go:130] > # ]
	I1210 06:28:08.870261  401365 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
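A sketch enabling the built-in validator with the keys listed above; the required plugin name is hypothetical:

    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true
    nri_validator_required_plugins = [
    	"my-policy-plugin",
    ]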
	I1210 06:28:08.870267  401365 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1210 06:28:08.870270  401365 command_runner.go:130] > [crio.stats]
	I1210 06:28:08.870279  401365 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1210 06:28:08.870285  401365 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1210 06:28:08.870289  401365 command_runner.go:130] > # stats_collection_period = 0
	I1210 06:28:08.870295  401365 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1210 06:28:08.870301  401365 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1210 06:28:08.870309  401365 command_runner.go:130] > # collection_period = 0
	I1210 06:28:08.872234  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838776003Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1210 06:28:08.872284  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838812886Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1210 06:28:08.872309  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838840094Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1210 06:28:08.872334  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839193559Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1210 06:28:08.872381  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839375723Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:08.872413  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839707715Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1210 06:28:08.872441  401365 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1210 06:28:08.872553  401365 cni.go:84] Creating CNI manager for ""
	I1210 06:28:08.872583  401365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:28:08.872624  401365 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:28:08.872677  401365 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-253997 NodeName:functional-253997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:28:08.872842  401365 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-253997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:28:08.872963  401365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:28:08.882589  401365 command_runner.go:130] > kubeadm
	I1210 06:28:08.882664  401365 command_runner.go:130] > kubectl
	I1210 06:28:08.882683  401365 command_runner.go:130] > kubelet
	I1210 06:28:08.883772  401365 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:28:08.883860  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:28:08.894311  401365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:28:08.917477  401365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:28:08.933123  401365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
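Note: the kubeadm.yaml.new shipped above bundles the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents printed earlier. A minimal Go sketch of an offline sanity check for the KubeletConfiguration fragment, assuming gopkg.in/yaml.v3 is available; the struct below covers only the fields shown in this log, not the full upstream API type:

	package main

	import (
		"fmt"
		"log"

		"gopkg.in/yaml.v3"
	)

	// kubeletCfg mirrors only the KubeletConfiguration fields that
	// appear in the generated kubeadm.yaml above.
	type kubeletCfg struct {
		APIVersion   string            `yaml:"apiVersion"`
		Kind         string            `yaml:"kind"`
		CgroupDriver string            `yaml:"cgroupDriver"`
		EvictionHard map[string]string `yaml:"evictionHard"`
		FailSwapOn   bool              `yaml:"failSwapOn"`
	}

	// doc is a trimmed copy of the fragment logged above; built with
	// string concatenation so no tabs leak into the YAML.
	const doc = "apiVersion: kubelet.config.k8s.io/v1beta1\n" +
		"kind: KubeletConfiguration\n" +
		"cgroupDriver: cgroupfs\n" +
		"evictionHard:\n" +
		"  nodefs.available: \"0%\"\n" +
		"failSwapOn: false\n"

	func main() {
		var c kubeletCfg
		if err := yaml.Unmarshal([]byte(doc), &c); err != nil {
			log.Fatalf("kubeadm.yaml fragment does not parse: %v", err)
		}
		fmt.Printf("%s/%s cgroupDriver=%s failSwapOn=%v evictionHard=%v\n",
			c.APIVersion, c.Kind, c.CgroupDriver, c.FailSwapOn, c.EvictionHard)
	}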
	I1210 06:28:08.951215  401365 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:28:08.955022  401365 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 06:28:08.955137  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:09.068336  401365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:28:09.626369  401365 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997 for IP: 192.168.49.2
	I1210 06:28:09.626393  401365 certs.go:195] generating shared ca certs ...
	I1210 06:28:09.626411  401365 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:09.626560  401365 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 06:28:09.626610  401365 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 06:28:09.626622  401365 certs.go:257] generating profile certs ...
	I1210 06:28:09.626723  401365 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key
	I1210 06:28:09.626797  401365 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423
	I1210 06:28:09.626842  401365 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key
	I1210 06:28:09.626855  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 06:28:09.626868  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 06:28:09.626879  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 06:28:09.626895  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 06:28:09.626917  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 06:28:09.626934  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 06:28:09.626951  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 06:28:09.626967  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 06:28:09.627018  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 06:28:09.627054  401365 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 06:28:09.627067  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:28:09.627098  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:28:09.627129  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:28:09.627160  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 06:28:09.627208  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:28:09.627243  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.627257  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem -> /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.627269  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> /usr/share/ca-certificates/3642652.pem
	I1210 06:28:09.627907  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:28:09.646839  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:28:09.665451  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:28:09.684144  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:28:09.703168  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:28:09.722766  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:28:09.740755  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:28:09.758979  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:28:09.777915  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:28:09.796193  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 06:28:09.814097  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 06:28:09.831978  401365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:28:09.845391  401365 ssh_runner.go:195] Run: openssl version
	I1210 06:28:09.851779  401365 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 06:28:09.852274  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.860146  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:28:09.868064  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872198  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872310  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872381  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.915298  401365 command_runner.go:130] > b5213941
	I1210 06:28:09.915776  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:28:09.923881  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.931564  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 06:28:09.939347  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943515  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943602  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943706  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.984596  401365 command_runner.go:130] > 51391683
	I1210 06:28:09.985095  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:28:09.992884  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.000682  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 06:28:10.009973  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015475  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015546  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015611  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.058412  401365 command_runner.go:130] > 3ec20f2e
	I1210 06:28:10.059028  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
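Note: the hash-and-symlink sequence above is the standard OpenSSL trust-store layout: OpenSSL locates CA certificates under /etc/ssl/certs through <subject-hash>.0 symlinks. A minimal Go sketch of the same install step, shelling out to openssl with the flags logged here; the paths are the ones from this run, and writing to /etc/ssl/certs requires root:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA mirrors the logged sequence: compute the OpenSSL subject
	// hash of a PEM certificate, then point <hash>.0 at it so the system
	// trust store can find the cert by issuer.
	func installCA(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // mimic `ln -fs`: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			log.Fatal(err)
		}
	}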
	I1210 06:28:10.067481  401365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:28:10.072097  401365 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:28:10.072141  401365 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 06:28:10.072148  401365 command_runner.go:130] > Device: 259,1	Inode: 3906312     Links: 1
	I1210 06:28:10.072155  401365 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:28:10.072162  401365 command_runner.go:130] > Access: 2025-12-10 06:24:00.744386425 +0000
	I1210 06:28:10.072185  401365 command_runner.go:130] > Modify: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072211  401365 command_runner.go:130] > Change: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072217  401365 command_runner.go:130] >  Birth: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072295  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:28:10.114065  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.114701  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:28:10.156441  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.157041  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:28:10.198547  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.198997  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:28:10.239473  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.239921  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:28:10.280741  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.281284  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:28:10.322073  401365 command_runner.go:130] > Certificate will not expire
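Note: the `-checkend 86400` invocations above ask openssl whether each certificate expires within the next 24 hours. The same check can be done in pure Go with crypto/x509; a minimal sketch, using one of the paths from this run as an illustrative input:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// checkEnd reports whether the certificate at path expires within
	// the given window, the pure-Go equivalent of
	// `openssl x509 -checkend 86400`.
	func checkEnd(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		if expiring {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}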
	I1210 06:28:10.322510  401365 kubeadm.go:401] StartCluster: {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:10.322592  401365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:28:10.322670  401365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:28:10.349813  401365 cri.go:89] found id: ""
	I1210 06:28:10.349915  401365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:28:10.357053  401365 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 06:28:10.357076  401365 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 06:28:10.357083  401365 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 06:28:10.358087  401365 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:28:10.358107  401365 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:28:10.358179  401365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:28:10.366355  401365 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:28:10.366773  401365 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-253997" does not appear in /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.366892  401365 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-362392/kubeconfig needs updating (will repair): [kubeconfig missing "functional-253997" cluster setting kubeconfig missing "functional-253997" context setting]
	I1210 06:28:10.367176  401365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.367620  401365 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.367775  401365 kapi.go:59] client config for functional-253997: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:28:10.368328  401365 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:28:10.368348  401365 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:28:10.368357  401365 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:28:10.368361  401365 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:28:10.368366  401365 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:28:10.368683  401365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:28:10.368778  401365 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 06:28:10.376809  401365 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 06:28:10.376842  401365 kubeadm.go:602] duration metric: took 18.728652ms to restartPrimaryControlPlane
	I1210 06:28:10.376852  401365 kubeadm.go:403] duration metric: took 54.348915ms to StartCluster
	I1210 06:28:10.376867  401365 settings.go:142] acquiring lock: {Name:mk50f3096e55008942432e93c6cf4976a70e957e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.376930  401365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.377580  401365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.377783  401365 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:28:10.378131  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:10.378203  401365 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:28:10.378273  401365 addons.go:70] Setting storage-provisioner=true in profile "functional-253997"
	I1210 06:28:10.378288  401365 addons.go:239] Setting addon storage-provisioner=true in "functional-253997"
	I1210 06:28:10.378298  401365 addons.go:70] Setting default-storageclass=true in profile "functional-253997"
	I1210 06:28:10.378308  401365 host.go:66] Checking if "functional-253997" exists ...
	I1210 06:28:10.378325  401365 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-253997"
	I1210 06:28:10.378609  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.378772  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.382148  401365 out.go:179] * Verifying Kubernetes components...
	I1210 06:28:10.385829  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:10.411769  401365 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.411927  401365 kapi.go:59] client config for functional-253997: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:28:10.412189  401365 addons.go:239] Setting addon default-storageclass=true in "functional-253997"
	I1210 06:28:10.412217  401365 host.go:66] Checking if "functional-253997" exists ...
	I1210 06:28:10.412622  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.423310  401365 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:28:10.429289  401365 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:10.429319  401365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:28:10.429390  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:10.437508  401365 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:10.437529  401365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:28:10.437602  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:10.484090  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:10.489523  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:10.601993  401365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:28:10.611397  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:10.637290  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.377346  401365 node_ready.go:35] waiting up to 6m0s for node "functional-253997" to be "Ready" ...
	I1210 06:28:11.377544  401365 type.go:168] "Request Body" body=""
	I1210 06:28:11.377656  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.377728  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1210 06:28:11.377850  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.377894  401365 retry.go:31] will retry after 259.470683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.378104  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.378200  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.378242  401365 retry.go:31] will retry after 196.4073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
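Note: the retry.go lines above come from minikube's bounded-backoff retry around `kubectl apply`: each failure schedules another attempt after a growing, jittered delay until the apiserver starts answering. A minimal Go sketch of that shape; the attempt count and base delay below are illustrative, not minikube's actual tuning:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs f up to attempts times, sleeping a jittered, growing
	// delay between failures, matching the shape of the log lines above.
	func retry(attempts int, base time.Duration, f func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			// jittered backoff: base * 2^i, scaled by a random factor
			d := base << uint(i)
			d = time.Duration(float64(d) * (0.5 + rand.Float64()))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		i := 0
		err := retry(5, 200*time.Millisecond, func() error {
			i++
			if i < 3 {
				return fmt.Errorf("connection refused (attempt %d)", i)
			}
			return nil
		})
		fmt.Println("final:", err)
	}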
	I1210 06:28:11.378345  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:11.575829  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.638697  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:11.638779  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.638826  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.638871  401365 retry.go:31] will retry after 208.428392ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.692820  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.696338  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.696370  401365 retry.go:31] will retry after 282.781918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.847619  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.878109  401365 type.go:168] "Request Body" body=""
	I1210 06:28:11.878199  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:11.878519  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:11.905645  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.908839  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.908880  401365 retry.go:31] will retry after 582.02813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.980121  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:12.039691  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.043135  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.043170  401365 retry.go:31] will retry after 432.314142ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.377666  401365 type.go:168] "Request Body" body=""
	I1210 06:28:12.377744  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:12.378081  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:12.476496  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:12.492099  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:12.562290  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.562336  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562356  401365 retry.go:31] will retry after 1.009011504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562409  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.562427  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562433  401365 retry.go:31] will retry after 937.221861ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.877643  401365 type.go:168] "Request Body" body=""
	I1210 06:28:12.877787  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:12.878157  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:13.377677  401365 type.go:168] "Request Body" body=""
	I1210 06:28:13.377754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:13.378100  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:13.378160  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
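Note: node_ready.go's loop above is a plain poll of the node's Ready condition; the GETs keep failing until the apiserver comes back up. A minimal client-go sketch of the same check, assuming a reachable kubeconfig; the kubeconfig path and node name are taken from this run purely for illustration:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady polls until the named node reports Ready or the timeout lapses.
	func nodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // apiserver may still be coming up
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22094-362392/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		if err := nodeReady(cs, "functional-253997", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("node Ready")
	}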
	I1210 06:28:13.500598  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:13.556443  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:13.560062  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.560116  401365 retry.go:31] will retry after 1.265541277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.572329  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:13.633856  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:13.637464  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.637509  401365 retry.go:31] will retry after 1.331173049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.877793  401365 type.go:168] "Request Body" body=""
	I1210 06:28:13.877888  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:13.878199  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.377638  401365 type.go:168] "Request Body" body=""
	I1210 06:28:14.377730  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:14.378082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.825793  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:14.878190  401365 type.go:168] "Request Body" body=""
	I1210 06:28:14.878261  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:14.878521  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.884055  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:14.884152  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:14.884201  401365 retry.go:31] will retry after 1.396995132s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:14.969467  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:15.059973  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:15.064387  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:15.064489  401365 retry.go:31] will retry after 957.92161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:15.377700  401365 type.go:168] "Request Body" body=""
	I1210 06:28:15.377772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:15.378126  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:15.378206  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:15.877555  401365 type.go:168] "Request Body" body=""
	I1210 06:28:15.877664  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:15.877987  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:16.023398  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:16.083212  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:16.083269  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.083288  401365 retry.go:31] will retry after 3.316582994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.281469  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:16.346229  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:16.346265  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.346285  401365 retry.go:31] will retry after 2.05295153s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.378603  401365 type.go:168] "Request Body" body=""
	I1210 06:28:16.378688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:16.379017  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:16.877615  401365 type.go:168] "Request Body" body=""
	I1210 06:28:16.877690  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:16.878019  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:17.377588  401365 type.go:168] "Request Body" body=""
	I1210 06:28:17.377663  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:17.377946  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:17.877651  401365 type.go:168] "Request Body" body=""
	I1210 06:28:17.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:17.878120  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:17.878201  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:18.377673  401365 type.go:168] "Request Body" body=""
	I1210 06:28:18.377749  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:18.378108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:18.400386  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:18.462469  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:18.462509  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:18.462528  401365 retry.go:31] will retry after 3.621738225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:18.877637  401365 type.go:168] "Request Body" body=""
	I1210 06:28:18.877719  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:18.878031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:19.377699  401365 type.go:168] "Request Body" body=""
	I1210 06:28:19.377775  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:19.378123  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:19.400389  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:19.462507  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:19.462542  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:19.462562  401365 retry.go:31] will retry after 6.347571238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:19.878220  401365 type.go:168] "Request Body" body=""
	I1210 06:28:19.878303  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:19.878573  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:19.878624  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:20.378571  401365 type.go:168] "Request Body" body=""
	I1210 06:28:20.378643  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:20.378957  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:20.877707  401365 type.go:168] "Request Body" body=""
	I1210 06:28:20.877781  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:20.878082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:21.377732  401365 type.go:168] "Request Body" body=""
	I1210 06:28:21.377840  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:21.378217  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:21.877933  401365 type.go:168] "Request Body" body=""
	I1210 06:28:21.878005  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:21.878280  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:22.084823  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:22.150796  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:22.150852  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:22.150872  401365 retry.go:31] will retry after 8.518894464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:22.378239  401365 type.go:168] "Request Body" body=""
	I1210 06:28:22.378314  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:22.378638  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:22.378700  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:22.878392  401365 type.go:168] "Request Body" body=""
	I1210 06:28:22.878470  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:22.878811  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:23.378412  401365 type.go:168] "Request Body" body=""
	I1210 06:28:23.378493  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:23.378816  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:23.878580  401365 type.go:168] "Request Body" body=""
	I1210 06:28:23.878657  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:23.879035  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:24.377745  401365 type.go:168] "Request Body" body=""
	I1210 06:28:24.377829  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:24.378165  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:24.878042  401365 type.go:168] "Request Body" body=""
	I1210 06:28:24.878110  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:24.878379  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:24.878424  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:25.378073  401365 type.go:168] "Request Body" body=""
	I1210 06:28:25.378148  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:25.378492  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:25.811094  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:25.867131  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:25.870279  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:25.870312  401365 retry.go:31] will retry after 4.064346895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:25.878534  401365 type.go:168] "Request Body" body=""
	I1210 06:28:25.878610  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:25.878933  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:26.378357  401365 type.go:168] "Request Body" body=""
	I1210 06:28:26.378423  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:26.378728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:26.878539  401365 type.go:168] "Request Body" body=""
	I1210 06:28:26.878615  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:26.878903  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:26.878950  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:27.377638  401365 type.go:168] "Request Body" body=""
	I1210 06:28:27.377740  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:27.378052  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:27.878386  401365 type.go:168] "Request Body" body=""
	I1210 06:28:27.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:27.878757  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:28.378587  401365 type.go:168] "Request Body" body=""
	I1210 06:28:28.378670  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:28.379016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:28.877671  401365 type.go:168] "Request Body" body=""
	I1210 06:28:28.877745  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:28.878083  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:29.378412  401365 type.go:168] "Request Body" body=""
	I1210 06:28:29.378486  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:29.378756  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:29.378811  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:29.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:28:29.877767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:29.878126  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:29.935383  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:29.993267  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:29.993316  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:29.993335  401365 retry.go:31] will retry after 13.293540925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.377660  401365 type.go:168] "Request Body" body=""
	I1210 06:28:30.377733  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:30.378062  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:30.670723  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:30.731809  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:30.735358  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.735395  401365 retry.go:31] will retry after 6.439855049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.877636  401365 type.go:168] "Request Body" body=""
	I1210 06:28:30.877707  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:30.878037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:31.377698  401365 type.go:168] "Request Body" body=""
	I1210 06:28:31.377773  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:31.378112  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:31.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:28:31.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:31.878135  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:31.878196  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:32.377829  401365 type.go:168] "Request Body" body=""
	I1210 06:28:32.377902  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:32.378167  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:32.877646  401365 type.go:168] "Request Body" body=""
	I1210 06:28:32.877742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:32.878081  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:33.377665  401365 type.go:168] "Request Body" body=""
	I1210 06:28:33.377750  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:33.378080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:33.878372  401365 type.go:168] "Request Body" body=""
	I1210 06:28:33.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:33.878725  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:33.878768  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:34.378621  401365 type.go:168] "Request Body" body=""
	I1210 06:28:34.378700  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:34.379046  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:34.877880  401365 type.go:168] "Request Body" body=""
	I1210 06:28:34.877952  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:34.878345  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:35.378044  401365 type.go:168] "Request Body" body=""
	I1210 06:28:35.378114  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:35.378389  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:35.878221  401365 type.go:168] "Request Body" body=""
	I1210 06:28:35.878303  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:35.878728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:35.878785  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:36.378584  401365 type.go:168] "Request Body" body=""
	I1210 06:28:36.378665  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:36.379008  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:36.878369  401365 type.go:168] "Request Body" body=""
	I1210 06:28:36.878444  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:36.878707  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:37.176405  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:37.232388  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:37.235885  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:37.235920  401365 retry.go:31] will retry after 10.78688793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:37.378207  401365 type.go:168] "Request Body" body=""
	I1210 06:28:37.378282  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:37.378581  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:37.878419  401365 type.go:168] "Request Body" body=""
	I1210 06:28:37.878495  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:37.878813  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:37.878863  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:38.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:28:38.378474  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:38.378754  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:38.878544  401365 type.go:168] "Request Body" body=""
	I1210 06:28:38.878622  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:38.878987  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:39.377713  401365 type.go:168] "Request Body" body=""
	I1210 06:28:39.377797  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:39.378129  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:39.878083  401365 type.go:168] "Request Body" body=""
	I1210 06:28:39.878150  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:39.878429  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:40.378442  401365 type.go:168] "Request Body" body=""
	I1210 06:28:40.378523  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:40.378856  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:40.378911  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:40.877583  401365 type.go:168] "Request Body" body=""
	I1210 06:28:40.877679  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:40.878034  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:41.378374  401365 type.go:168] "Request Body" body=""
	I1210 06:28:41.378447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:41.378715  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:41.878491  401365 type.go:168] "Request Body" body=""
	I1210 06:28:41.878566  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:41.878923  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:42.377671  401365 type.go:168] "Request Body" body=""
	I1210 06:28:42.377751  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:42.378141  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:42.877599  401365 type.go:168] "Request Body" body=""
	I1210 06:28:42.877683  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:42.877945  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:42.877984  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:43.287649  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:43.346928  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:43.346975  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:43.346995  401365 retry.go:31] will retry after 14.625741063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:43.378235  401365 type.go:168] "Request Body" body=""
	I1210 06:28:43.378315  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:43.378642  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:43.878431  401365 type.go:168] "Request Body" body=""
	I1210 06:28:43.878526  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:43.878848  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:44.378341  401365 type.go:168] "Request Body" body=""
	I1210 06:28:44.378412  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:44.378674  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:44.877586  401365 type.go:168] "Request Body" body=""
	I1210 06:28:44.877680  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:44.878028  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:44.878086  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:45.377798  401365 type.go:168] "Request Body" body=""
	I1210 06:28:45.377879  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:45.378245  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:45.878503  401365 type.go:168] "Request Body" body=""
	I1210 06:28:45.878572  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:45.878831  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:46.378595  401365 type.go:168] "Request Body" body=""
	I1210 06:28:46.378672  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:46.378982  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:46.877682  401365 type.go:168] "Request Body" body=""
	I1210 06:28:46.877759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:46.878095  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:46.878155  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:47.377841  401365 type.go:168] "Request Body" body=""
	I1210 06:28:47.377917  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:47.378263  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:47.877992  401365 type.go:168] "Request Body" body=""
	I1210 06:28:47.878078  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:47.878438  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:48.023828  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:48.081536  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:48.084895  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:48.084933  401365 retry.go:31] will retry after 18.097374996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:48.378332  401365 type.go:168] "Request Body" body=""
	I1210 06:28:48.378422  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:48.378753  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:48.878419  401365 type.go:168] "Request Body" body=""
	I1210 06:28:48.878497  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:48.878762  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:48.878816  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:49.378574  401365 type.go:168] "Request Body" body=""
	I1210 06:28:49.378648  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:49.378945  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-253997 request/response cycle above repeats every ~500 ms through 06:28:57, each response empty (status="", milliseconds=0); node_ready.go:55 repeats its "will retry ... connection refused" warning at 06:28:51, 06:28:53 and 06:28:55 ...]
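For orientation: the loop above is minikube waiting for the node to report a Ready condition; it re-issues the same GET every ~500 ms and treats "connection refused" as retryable. A minimal client-go sketch of that pattern follows. It is illustrative only: the package, function name, and tick interval are ours, not minikube's node_ready.go internals.

	// Package nodewait: illustrative sketch, not minikube source.
	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the apiserver until the named node reports
	// condition Ready=True, retrying on transient errors such as the
	// "connect: connection refused" seen in the log above.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // ~500 ms cadence, as in the log
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// apiserver not reachable yet; log and keep retrying
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					continue
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
	}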
	I1210 06:28:57.973321  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:58.030522  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:58.034296  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:58.034334  401365 retry.go:31] will retry after 29.63385811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
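The 29.63385811s delay above is not a fixed constant: minikube's retry helper sleeps a randomized, growing interval between attempts, which is why the two addon applies in this log are rescheduled with different delays. A rough sketch of that retry-with-jittered-backoff pattern is below; the doubling policy and jitter range are assumptions, not a transcription of retry.go.

	// Package retrydemo: illustrative backoff sketch, not minikube's retry.go.
	package retrydemo

	import (
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or maxTries is
	// exhausted, sleeping a jittered, exponentially growing interval
	// between attempts. base must be > 0.
	func retryWithBackoff(fn func() error, maxTries int, base time.Duration) error {
		var err error
		delay := base
		for i := 0; i < maxTries; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay))) // up to +100% jitter
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}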
	[... identical polling of /api/v1/nodes/functional-253997 continues every ~500 ms from 06:28:58 through 06:29:05; the node_ready.go:55 "will retry" warning recurs at 06:28:58, 06:29:00, 06:29:02 and 06:29:04 ...]
	I1210 06:29:06.182558  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:29:06.240148  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:06.243928  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:29:06.243964  401365 retry.go:31] will retry after 43.852698404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
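Both addon manifests fail the same way: kubectl validates apply input against the apiserver's /openapi/v2 document, so while the apiserver is down every attempt dies during validation, before anything reaches the cluster. The dependency can be made explicit by probing the apiserver first; the sketch below checks the standard /readyz endpoint. The helper name is ours, and skipping TLS verification is for illustration against a local minikube endpoint only.

	// Package apicheck: illustrative health probe, not minikube source.
	package apicheck

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// apiserverUp reports whether the apiserver answers /readyz with 200.
	func apiserverUp(host string) bool {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Illustration only: real callers should trust the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(fmt.Sprintf("https://%s/readyz", host))
		if err != nil {
			return false // e.g. "connect: connection refused", as in the log
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

Something like apiserverUp("localhost:8441") guarding the apply would turn the repeated validation failures above into a single clean wait.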
	[... identical polling of /api/v1/nodes/functional-253997 continues every ~500 ms from 06:29:06 through 06:29:27; the node_ready.go:55 "will retry" warning recurs roughly every 2 s, from 06:29:06 through 06:29:25 ...]
	I1210 06:29:27.669323  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:29:27.726986  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:27.731088  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:27.731190  401365 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
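To read the sequence: the storage-provisioner apply at 06:29:27 is the retry scheduled 29.63 s earlier at 06:28:58; it fails the same way, and minikube now surfaces the failure to the user via out.go ("Enabling 'storage-provisioner' returned an error: running callbacks ...") while the node poll continues. The storageclass retry scheduled at 06:29:06 (+43.85 s, so due around 06:29:50) has not yet fired at this point in the log.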
	[... identical polling of /api/v1/nodes/functional-253997 continues every ~500 ms from 06:29:28 through 06:29:46; the node_ready.go:55 "will retry" warning recurs roughly every 2 s, from 06:29:27 through 06:29:44 ...]
	I1210 06:29:46.878419  401365 type.go:168] "Request Body" body=""
	I1210 06:29:46.878504  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:46.878818  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:46.878868  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:47.377582  401365 type.go:168] "Request Body" body=""
	I1210 06:29:47.377662  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:47.378008  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:47.878425  401365 type.go:168] "Request Body" body=""
	I1210 06:29:47.878508  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:47.878839  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:48.378385  401365 type.go:168] "Request Body" body=""
	I1210 06:29:48.378477  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:48.378769  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:48.878588  401365 type.go:168] "Request Body" body=""
	I1210 06:29:48.878669  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:48.878986  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:48.879047  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:49.377711  401365 type.go:168] "Request Body" body=""
	I1210 06:29:49.377790  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:49.378153  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:49.878038  401365 type.go:168] "Request Body" body=""
	I1210 06:29:49.878111  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:49.878364  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:50.096947  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:29:50.160267  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:50.160316  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:50.160396  401365 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 06:29:50.163553  401365 out.go:179] * Enabled addons: 
	I1210 06:29:50.167218  401365 addons.go:530] duration metric: took 1m39.789022145s for enable addons: enabled=[]
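The storageclass addon fails for the same underlying reason: kubectl tries to download the OpenAPI schema from the unreachable apiserver to validate the manifest, so the apply exits with status 1 and the addon callback gives up with an empty enabled list. A rough Go reproduction of the ssh_runner.go invocation logged above (paths copied verbatim from the log; note that --validate=false, which the error message suggests, would only skip the schema download, not make the apply succeed while the apiserver is down):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command as ssh_runner.go:195 above; sudo accepts the leading
	// KUBECONFIG=... assignment as an environment override.
	out, err := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "--force", "-f",
		"/etc/kubernetes/addons/storageclass.yaml",
	).CombinedOutput()
	if err != nil {
		// With nothing listening on localhost:8441 this fails exactly as in
		// the log: "failed to download openapi ... connection refused".
		fmt.Printf("apply failed, will retry: %v\n%s", err, out)
	}
}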
	I1210 06:29:50.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:29:50.377718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:50.378070  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:50.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:29:50.877785  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:50.878103  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:51.378394  401365 type.go:168] "Request Body" body=""
	I1210 06:29:51.378472  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:51.378746  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:51.378813  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:51.878588  401365 type.go:168] "Request Body" body=""
	I1210 06:29:51.878669  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:51.878981  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:52.377564  401365 type.go:168] "Request Body" body=""
	I1210 06:29:52.377654  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:52.378002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:52.878385  401365 type.go:168] "Request Body" body=""
	I1210 06:29:52.878456  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:52.878735  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:53.378623  401365 type.go:168] "Request Body" body=""
	I1210 06:29:53.378696  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:53.379007  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:53.379062  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:53.877727  401365 type.go:168] "Request Body" body=""
	I1210 06:29:53.877818  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:53.878163  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:54.377608  401365 type.go:168] "Request Body" body=""
	I1210 06:29:54.377697  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:54.378015  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:54.877713  401365 type.go:168] "Request Body" body=""
	I1210 06:29:54.877810  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:54.878175  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:55.377895  401365 type.go:168] "Request Body" body=""
	I1210 06:29:55.377968  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:55.378309  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:55.877991  401365 type.go:168] "Request Body" body=""
	I1210 06:29:55.878064  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:55.878416  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:55.878476  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:56.378216  401365 type.go:168] "Request Body" body=""
	I1210 06:29:56.378295  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:56.378666  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:56.878479  401365 type.go:168] "Request Body" body=""
	I1210 06:29:56.878557  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:56.878903  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:57.378397  401365 type.go:168] "Request Body" body=""
	I1210 06:29:57.378477  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:57.378742  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:57.878382  401365 type.go:168] "Request Body" body=""
	I1210 06:29:57.878456  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:57.878755  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:57.878801  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:58.378559  401365 type.go:168] "Request Body" body=""
	I1210 06:29:58.378645  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:58.378936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:58.877600  401365 type.go:168] "Request Body" body=""
	I1210 06:29:58.877684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:58.877957  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:59.377641  401365 type.go:168] "Request Body" body=""
	I1210 06:29:59.377718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:59.378043  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:59.878036  401365 type.go:168] "Request Body" body=""
	I1210 06:29:59.878111  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:59.878453  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:00.403040  401365 type.go:168] "Request Body" body=""
	I1210 06:30:00.403489  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:00.403971  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:00.404065  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:00.877628  401365 type.go:168] "Request Body" body=""
	I1210 06:30:00.877715  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:00.878111  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:01.378405  401365 type.go:168] "Request Body" body=""
	I1210 06:30:01.378490  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:01.378858  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:01.878587  401365 type.go:168] "Request Body" body=""
	I1210 06:30:01.878670  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:01.879048  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:02.377809  401365 type.go:168] "Request Body" body=""
	I1210 06:30:02.377884  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:02.378218  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:02.877618  401365 type.go:168] "Request Body" body=""
	I1210 06:30:02.877691  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:02.877969  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:02.878012  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:03.377736  401365 type.go:168] "Request Body" body=""
	I1210 06:30:03.377819  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:03.378180  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:03.877919  401365 type.go:168] "Request Body" body=""
	I1210 06:30:03.878016  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:03.878393  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:04.378222  401365 type.go:168] "Request Body" body=""
	I1210 06:30:04.378313  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:04.378635  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:04.878431  401365 type.go:168] "Request Body" body=""
	I1210 06:30:04.878505  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:04.879753  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1210 06:30:04.879813  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:05.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:30:05.378482  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:05.378830  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:05.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:30:05.878480  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:05.878741  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:06.378628  401365 type.go:168] "Request Body" body=""
	I1210 06:30:06.378703  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:06.379023  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:06.877690  401365 type.go:168] "Request Body" body=""
	I1210 06:30:06.877764  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:06.878119  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:07.377808  401365 type.go:168] "Request Body" body=""
	I1210 06:30:07.377895  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:07.378245  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:07.378302  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:07.877669  401365 type.go:168] "Request Body" body=""
	I1210 06:30:07.877760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:07.878098  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:08.377849  401365 type.go:168] "Request Body" body=""
	I1210 06:30:08.377929  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:08.378272  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:08.877616  401365 type.go:168] "Request Body" body=""
	I1210 06:30:08.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:08.878027  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:09.378016  401365 type.go:168] "Request Body" body=""
	I1210 06:30:09.378098  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:09.378433  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:09.378480  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:09.878345  401365 type.go:168] "Request Body" body=""
	I1210 06:30:09.878427  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:09.878801  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:10.378580  401365 type.go:168] "Request Body" body=""
	I1210 06:30:10.378704  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:10.379089  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:10.877668  401365 type.go:168] "Request Body" body=""
	I1210 06:30:10.877745  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:10.878101  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:11.377836  401365 type.go:168] "Request Body" body=""
	I1210 06:30:11.377918  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:11.378278  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:11.877990  401365 type.go:168] "Request Body" body=""
	I1210 06:30:11.878058  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:11.878328  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:11.878370  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:12.377682  401365 type.go:168] "Request Body" body=""
	I1210 06:30:12.377762  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:12.378131  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:12.877864  401365 type.go:168] "Request Body" body=""
	I1210 06:30:12.877940  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:12.878290  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:13.377986  401365 type.go:168] "Request Body" body=""
	I1210 06:30:13.378060  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:13.378390  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:13.878180  401365 type.go:168] "Request Body" body=""
	I1210 06:30:13.878256  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:13.878586  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:13.878648  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:14.378401  401365 type.go:168] "Request Body" body=""
	I1210 06:30:14.378479  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:14.378827  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:14.878395  401365 type.go:168] "Request Body" body=""
	I1210 06:30:14.878479  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:14.878758  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:15.378543  401365 type.go:168] "Request Body" body=""
	I1210 06:30:15.378623  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:15.378945  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:15.877679  401365 type.go:168] "Request Body" body=""
	I1210 06:30:15.877759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:15.878101  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:16.377593  401365 type.go:168] "Request Body" body=""
	I1210 06:30:16.377664  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:16.377962  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:16.378009  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:16.877684  401365 type.go:168] "Request Body" body=""
	I1210 06:30:16.877760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:16.878132  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:17.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:30:17.377724  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:17.378099  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:17.877591  401365 type.go:168] "Request Body" body=""
	I1210 06:30:17.877703  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:17.878030  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:18.377710  401365 type.go:168] "Request Body" body=""
	I1210 06:30:18.377789  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:18.378142  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:18.378208  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:18.877756  401365 type.go:168] "Request Body" body=""
	I1210 06:30:18.877843  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:18.878196  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:19.377801  401365 type.go:168] "Request Body" body=""
	I1210 06:30:19.377880  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:19.378158  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:19.878182  401365 type.go:168] "Request Body" body=""
	I1210 06:30:19.878260  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:19.878613  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:20.378479  401365 type.go:168] "Request Body" body=""
	I1210 06:30:20.378562  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:20.378922  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:20.378995  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:20.878437  401365 type.go:168] "Request Body" body=""
	I1210 06:30:20.878515  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:20.878801  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:21.378593  401365 type.go:168] "Request Body" body=""
	I1210 06:30:21.378678  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:21.379014  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:21.877727  401365 type.go:168] "Request Body" body=""
	I1210 06:30:21.877805  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:21.878139  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:22.377644  401365 type.go:168] "Request Body" body=""
	I1210 06:30:22.377720  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:22.378036  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:22.877631  401365 type.go:168] "Request Body" body=""
	I1210 06:30:22.877708  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:22.878077  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:22.878133  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:23.377664  401365 type.go:168] "Request Body" body=""
	I1210 06:30:23.377759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:23.378132  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:23.877609  401365 type.go:168] "Request Body" body=""
	I1210 06:30:23.877684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:23.878013  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:24.377728  401365 type.go:168] "Request Body" body=""
	I1210 06:30:24.377803  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:24.378189  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:24.878072  401365 type.go:168] "Request Body" body=""
	I1210 06:30:24.878208  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:24.878537  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:24.878592  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:25.378359  401365 type.go:168] "Request Body" body=""
	I1210 06:30:25.378444  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:25.378710  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:25.878517  401365 type.go:168] "Request Body" body=""
	I1210 06:30:25.878613  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:25.878961  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:26.377659  401365 type.go:168] "Request Body" body=""
	I1210 06:30:26.377737  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:26.378086  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:26.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:30:26.878468  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:26.878744  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:26.878791  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:27.378535  401365 type.go:168] "Request Body" body=""
	I1210 06:30:27.378611  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:27.378947  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:27.877649  401365 type.go:168] "Request Body" body=""
	I1210 06:30:27.877732  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:27.878085  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:28.377643  401365 type.go:168] "Request Body" body=""
	I1210 06:30:28.377718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:28.378171  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:28.877894  401365 type.go:168] "Request Body" body=""
	I1210 06:30:28.877977  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:28.878324  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:29.378072  401365 type.go:168] "Request Body" body=""
	I1210 06:30:29.378156  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:29.378530  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:29.378586  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:29.878257  401365 type.go:168] "Request Body" body=""
	I1210 06:30:29.878331  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:29.878620  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:30.377624  401365 type.go:168] "Request Body" body=""
	I1210 06:30:30.377719  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:30.378103  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:30.877807  401365 type.go:168] "Request Body" body=""
	I1210 06:30:30.877939  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:30.878264  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:31.377983  401365 type.go:168] "Request Body" body=""
	I1210 06:30:31.378059  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:31.378337  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:31.877670  401365 type.go:168] "Request Body" body=""
	I1210 06:30:31.877748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:31.878104  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:31.878164  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:32.377881  401365 type.go:168] "Request Body" body=""
	I1210 06:30:32.377966  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:32.378312  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:32.877995  401365 type.go:168] "Request Body" body=""
	I1210 06:30:32.878071  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:32.878437  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:33.378235  401365 type.go:168] "Request Body" body=""
	I1210 06:30:33.378311  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:33.378664  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:33.878393  401365 type.go:168] "Request Body" body=""
	I1210 06:30:33.878477  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:33.878789  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:33.878839  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:34.378385  401365 type.go:168] "Request Body" body=""
	I1210 06:30:34.378460  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:34.378856  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:34.877875  401365 type.go:168] "Request Body" body=""
	I1210 06:30:34.877953  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:34.878307  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:35.377692  401365 type.go:168] "Request Body" body=""
	I1210 06:30:35.377807  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:35.378225  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:35.877600  401365 type.go:168] "Request Body" body=""
	I1210 06:30:35.877677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:35.878020  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:36.377715  401365 type.go:168] "Request Body" body=""
	I1210 06:30:36.377791  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:36.378143  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:36.378205  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:36.877696  401365 type.go:168] "Request Body" body=""
	I1210 06:30:36.877774  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:36.878137  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:37.378398  401365 type.go:168] "Request Body" body=""
	I1210 06:30:37.378477  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:37.378746  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:37.878553  401365 type.go:168] "Request Body" body=""
	I1210 06:30:37.878672  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:37.879091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:38.377679  401365 type.go:168] "Request Body" body=""
	I1210 06:30:38.377760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:38.378109  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:38.877617  401365 type.go:168] "Request Body" body=""
	I1210 06:30:38.877690  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:38.877965  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:38.878020  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:39.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:30:39.377729  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:39.378078  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:39.877852  401365 type.go:168] "Request Body" body=""
	I1210 06:30:39.878005  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:39.878296  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:40.378297  401365 type.go:168] "Request Body" body=""
	I1210 06:30:40.378419  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:40.378746  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:40.878609  401365 type.go:168] "Request Body" body=""
	I1210 06:30:40.878695  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:40.879047  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:40.879109  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:41.377677  401365 type.go:168] "Request Body" body=""
	I1210 06:30:41.377761  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:41.378136  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:41.877816  401365 type.go:168] "Request Body" body=""
	I1210 06:30:41.877897  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:41.878247  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:42.377681  401365 type.go:168] "Request Body" body=""
	I1210 06:30:42.377759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:42.378160  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:42.877905  401365 type.go:168] "Request Body" body=""
	I1210 06:30:42.877988  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:42.878334  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:43.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:30:43.377686  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:43.378002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:43.378054  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:43.877669  401365 type.go:168] "Request Body" body=""
	I1210 06:30:43.877751  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:43.878145  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:44.377872  401365 type.go:168] "Request Body" body=""
	I1210 06:30:44.377977  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:44.378341  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:44.878225  401365 type.go:168] "Request Body" body=""
	I1210 06:30:44.878299  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:44.878563  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:45.378360  401365 type.go:168] "Request Body" body=""
	I1210 06:30:45.378435  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:45.378860  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:45.378937  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:45.878557  401365 type.go:168] "Request Body" body=""
	I1210 06:30:45.878640  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:45.878996  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:46.378357  401365 type.go:168] "Request Body" body=""
	I1210 06:30:46.378429  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:46.378738  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:46.878533  401365 type.go:168] "Request Body" body=""
	I1210 06:30:46.878610  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:46.878947  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:47.377691  401365 type.go:168] "Request Body" body=""
	I1210 06:30:47.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:47.378135  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:47.878384  401365 type.go:168] "Request Body" body=""
	I1210 06:30:47.878498  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:47.878783  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:47.878827  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:48.378583  401365 type.go:168] "Request Body" body=""
	I1210 06:30:48.378670  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:48.379006  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:48.877596  401365 type.go:168] "Request Body" body=""
	I1210 06:30:48.877674  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:48.878023  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:49.377609  401365 type.go:168] "Request Body" body=""
	I1210 06:30:49.377685  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:49.377965  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:49.877909  401365 type.go:168] "Request Body" body=""
	I1210 06:30:49.877985  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:49.878310  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:50.378111  401365 type.go:168] "Request Body" body=""
	I1210 06:30:50.378203  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:50.378557  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:50.378619  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:50.878363  401365 type.go:168] "Request Body" body=""
	I1210 06:30:50.878438  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:50.878702  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:51.378562  401365 type.go:168] "Request Body" body=""
	I1210 06:30:51.378644  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:51.378985  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:51.877673  401365 type.go:168] "Request Body" body=""
	I1210 06:30:51.877755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:51.878129  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:52.377603  401365 type.go:168] "Request Body" body=""
	I1210 06:30:52.377672  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:52.377985  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:52.877662  401365 type.go:168] "Request Body" body=""
	I1210 06:30:52.877741  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:52.878113  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:52.878172  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:53.377842  401365 type.go:168] "Request Body" body=""
	I1210 06:30:53.377929  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:53.378271  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:53.877988  401365 type.go:168] "Request Body" body=""
	I1210 06:30:53.878059  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:53.878397  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:54.378229  401365 type.go:168] "Request Body" body=""
	I1210 06:30:54.378302  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:54.378632  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:54.878381  401365 type.go:168] "Request Body" body=""
	I1210 06:30:54.878460  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:54.878809  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:54.878867  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:55.378406  401365 type.go:168] "Request Body" body=""
	I1210 06:30:55.378491  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:55.378761  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:55.878532  401365 type.go:168] "Request Body" body=""
	I1210 06:30:55.878631  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:55.878979  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:56.377687  401365 type.go:168] "Request Body" body=""
	I1210 06:30:56.377765  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:56.378102  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:56.878412  401365 type.go:168] "Request Body" body=""
	I1210 06:30:56.878480  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:56.878765  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:57.378590  401365 type.go:168] "Request Body" body=""
	I1210 06:30:57.378667  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:57.379010  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:57.379066  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:57.877659  401365 type.go:168] "Request Body" body=""
	I1210 06:30:57.877736  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:57.878094  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:58.377804  401365 type.go:168] "Request Body" body=""
	I1210 06:30:58.377882  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:58.378161  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:58.877653  401365 type.go:168] "Request Body" body=""
	I1210 06:30:58.877724  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:58.878038  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:59.377678  401365 type.go:168] "Request Body" body=""
	I1210 06:30:59.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:59.378090  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:59.878022  401365 type.go:168] "Request Body" body=""
	I1210 06:30:59.878105  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:59.878446  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:59.878509  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:00.377586  401365 type.go:168] "Request Body" body=""
	I1210 06:31:00.377680  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:00.378151  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:00.877892  401365 type.go:168] "Request Body" body=""
	I1210 06:31:00.877975  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:00.878336  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:01.377928  401365 type.go:168] "Request Body" body=""
	I1210 06:31:01.378000  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:01.378269  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:01.877906  401365 type.go:168] "Request Body" body=""
	I1210 06:31:01.877996  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:01.878438  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:02.377746  401365 type.go:168] "Request Body" body=""
	I1210 06:31:02.377823  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:02.378191  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:02.378256  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:02.878389  401365 type.go:168] "Request Body" body=""
	I1210 06:31:02.878461  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:02.878756  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:03.378549  401365 type.go:168] "Request Body" body=""
	I1210 06:31:03.378628  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:03.378977  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:03.877675  401365 type.go:168] "Request Body" body=""
	I1210 06:31:03.877754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:03.878104  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:04.377643  401365 type.go:168] "Request Body" body=""
	I1210 06:31:04.377719  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:04.378069  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:04.878124  401365 type.go:168] "Request Body" body=""
	I1210 06:31:04.878218  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:04.878572  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:04.878635  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:05.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:31:05.378481  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:05.378786  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:05.878376  401365 type.go:168] "Request Body" body=""
	I1210 06:31:05.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:05.878782  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:06.378579  401365 type.go:168] "Request Body" body=""
	I1210 06:31:06.378655  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:06.379033  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:06.877752  401365 type.go:168] "Request Body" body=""
	I1210 06:31:06.877828  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:06.878145  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:07.377614  401365 type.go:168] "Request Body" body=""
	I1210 06:31:07.377703  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:07.378053  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:07.378103  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:07.877679  401365 type.go:168] "Request Body" body=""
	I1210 06:31:07.877774  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:07.878132  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:08.377692  401365 type.go:168] "Request Body" body=""
	I1210 06:31:08.377773  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:08.378135  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:08.877811  401365 type.go:168] "Request Body" body=""
	I1210 06:31:08.877884  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:08.878180  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:09.377668  401365 type.go:168] "Request Body" body=""
	I1210 06:31:09.377754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:09.378101  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:09.378155  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:09.877923  401365 type.go:168] "Request Body" body=""
	I1210 06:31:09.877999  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:09.878321  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:10.378307  401365 type.go:168] "Request Body" body=""
	I1210 06:31:10.378386  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:10.378650  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:10.878423  401365 type.go:168] "Request Body" body=""
	I1210 06:31:10.878500  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:10.878869  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:11.378503  401365 type.go:168] "Request Body" body=""
	I1210 06:31:11.378584  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:11.378952  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:11.379008  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:11.878378  401365 type.go:168] "Request Body" body=""
	I1210 06:31:11.878450  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:11.878715  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:12.378514  401365 type.go:168] "Request Body" body=""
	I1210 06:31:12.378591  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:12.378905  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:12.877633  401365 type.go:168] "Request Body" body=""
	I1210 06:31:12.877711  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:12.878080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:13.378362  401365 type.go:168] "Request Body" body=""
	I1210 06:31:13.378431  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:13.378728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:13.878515  401365 type.go:168] "Request Body" body=""
	I1210 06:31:13.878592  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:13.878918  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:13.878976  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:14.377681  401365 type.go:168] "Request Body" body=""
	I1210 06:31:14.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:14.378115  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:14.878072  401365 type.go:168] "Request Body" body=""
	I1210 06:31:14.878147  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:14.878405  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:15.378262  401365 type.go:168] "Request Body" body=""
	I1210 06:31:15.378345  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:15.378686  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:15.878492  401365 type.go:168] "Request Body" body=""
	I1210 06:31:15.878569  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:15.878935  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:16.378356  401365 type.go:168] "Request Body" body=""
	I1210 06:31:16.378441  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:16.378690  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:16.378731  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:16.878535  401365 type.go:168] "Request Body" body=""
	I1210 06:31:16.878609  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:16.878944  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:17.377679  401365 type.go:168] "Request Body" body=""
	I1210 06:31:17.377753  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:17.378118  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:17.877723  401365 type.go:168] "Request Body" body=""
	I1210 06:31:17.877797  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:17.878157  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:18.377666  401365 type.go:168] "Request Body" body=""
	I1210 06:31:18.377744  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:18.378082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:18.877660  401365 type.go:168] "Request Body" body=""
	I1210 06:31:18.877734  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:18.878083  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:18.878141  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:19.378341  401365 type.go:168] "Request Body" body=""
	I1210 06:31:19.378417  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:19.378680  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:19.878385  401365 type.go:168] "Request Body" body=""
	I1210 06:31:19.878484  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:19.878844  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:20.377537  401365 type.go:168] "Request Body" body=""
	I1210 06:31:20.377620  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:20.377967  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:20.877662  401365 type.go:168] "Request Body" body=""
	I1210 06:31:20.877748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:20.878176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:20.878224  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:21.377644  401365 type.go:168] "Request Body" body=""
	I1210 06:31:21.377723  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:21.378064  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:21.877799  401365 type.go:168] "Request Body" body=""
	I1210 06:31:21.877892  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:21.878256  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:22.377991  401365 type.go:168] "Request Body" body=""
	I1210 06:31:22.378069  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:22.378361  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:22.877671  401365 type.go:168] "Request Body" body=""
	I1210 06:31:22.877765  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:22.878106  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:23.377677  401365 type.go:168] "Request Body" body=""
	I1210 06:31:23.377772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:23.378156  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:23.378228  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:23.877603  401365 type.go:168] "Request Body" body=""
	I1210 06:31:23.877676  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:23.877972  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:24.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:31:24.377753  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:24.378120  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:24.877983  401365 type.go:168] "Request Body" body=""
	I1210 06:31:24.878078  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:24.878441  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:25.378227  401365 type.go:168] "Request Body" body=""
	I1210 06:31:25.378296  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:25.378552  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:25.378598  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:25.878364  401365 type.go:168] "Request Body" body=""
	I1210 06:31:25.878444  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:25.878791  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:26.377537  401365 type.go:168] "Request Body" body=""
	I1210 06:31:26.377611  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:26.377959  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:26.878388  401365 type.go:168] "Request Body" body=""
	I1210 06:31:26.878456  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:26.878725  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:27.378513  401365 type.go:168] "Request Body" body=""
	I1210 06:31:27.378591  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:27.378938  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:27.378993  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:27.877665  401365 type.go:168] "Request Body" body=""
	I1210 06:31:27.877739  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:27.878092  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:28.378425  401365 type.go:168] "Request Body" body=""
	I1210 06:31:28.378506  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:28.378821  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:28.877546  401365 type.go:168] "Request Body" body=""
	I1210 06:31:28.877631  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:28.878002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:29.377725  401365 type.go:168] "Request Body" body=""
	I1210 06:31:29.377802  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:29.378176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:29.878448  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-253997 poll repeated every ~500ms from 06:31:29 through 06:32:31, each attempt returning an empty response ("Response" status="" headers="" milliseconds=0), and the same node_ready.go:55 "connection refused" warning recurred at roughly 2.5s intervals throughout]
	W1210 06:32:31.378859  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:31.877545  401365 type.go:168] "Request Body" body=""
	I1210 06:32:31.877620  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:31.877962  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:32.377685  401365 type.go:168] "Request Body" body=""
	I1210 06:32:32.377765  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:32.378115  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:32.878382  401365 type.go:168] "Request Body" body=""
	I1210 06:32:32.878458  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:32.878718  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:33.378533  401365 type.go:168] "Request Body" body=""
	I1210 06:32:33.378613  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:33.378973  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:33.379031  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:33.877670  401365 type.go:168] "Request Body" body=""
	I1210 06:32:33.877743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:33.878099  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:34.377573  401365 type.go:168] "Request Body" body=""
	I1210 06:32:34.377644  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:34.377911  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:34.877902  401365 type.go:168] "Request Body" body=""
	I1210 06:32:34.877978  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:34.878339  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:35.378057  401365 type.go:168] "Request Body" body=""
	I1210 06:32:35.378143  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:35.378506  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:35.878224  401365 type.go:168] "Request Body" body=""
	I1210 06:32:35.878295  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:35.878562  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:35.878604  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:36.378404  401365 type.go:168] "Request Body" body=""
	I1210 06:32:36.378487  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:36.378840  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:36.877571  401365 type.go:168] "Request Body" body=""
	I1210 06:32:36.877653  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:36.877994  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:37.378346  401365 type.go:168] "Request Body" body=""
	I1210 06:32:37.378421  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:37.378684  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:37.878461  401365 type.go:168] "Request Body" body=""
	I1210 06:32:37.878543  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:37.878890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:37.878952  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:38.378573  401365 type.go:168] "Request Body" body=""
	I1210 06:32:38.378654  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.378951  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:38.878358  401365 type.go:168] "Request Body" body=""
	I1210 06:32:38.878428  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.878691  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.378473  401365 type.go:168] "Request Body" body=""
	I1210 06:32:39.378552  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.378939  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.877654  401365 type.go:168] "Request Body" body=""
	I1210 06:32:39.877738  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.878074  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:40.377853  401365 type.go:168] "Request Body" body=""
	I1210 06:32:40.377926  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.378227  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:40.378275  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:40.877665  401365 type.go:168] "Request Body" body=""
	I1210 06:32:40.877741  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.878110  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.377673  401365 type.go:168] "Request Body" body=""
	I1210 06:32:41.377748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.378080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.878456  401365 type.go:168] "Request Body" body=""
	I1210 06:32:41.878528  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.878849  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:42.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:32:42.377701  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.378097  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:42.877683  401365 type.go:168] "Request Body" body=""
	I1210 06:32:42.877759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.878128  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:42.878186  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:43.378375  401365 type.go:168] "Request Body" body=""
	I1210 06:32:43.378451  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.378720  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:43.878495  401365 type.go:168] "Request Body" body=""
	I1210 06:32:43.878566  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.878911  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.378610  401365 type.go:168] "Request Body" body=""
	I1210 06:32:44.378700  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.379090  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.877962  401365 type.go:168] "Request Body" body=""
	I1210 06:32:44.878030  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.878300  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:44.878343  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:45.377682  401365 type.go:168] "Request Body" body=""
	I1210 06:32:45.377763  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.378114  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:45.877818  401365 type.go:168] "Request Body" body=""
	I1210 06:32:45.877892  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.878234  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:46.377583  401365 type.go:168] "Request Body" body=""
	I1210 06:32:46.377660  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:46.377917  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:46.877665  401365 type.go:168] "Request Body" body=""
	I1210 06:32:46.877751  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:46.878148  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:47.377793  401365 type.go:168] "Request Body" body=""
	I1210 06:32:47.377870  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:47.378225  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:47.378277  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:47.877609  401365 type.go:168] "Request Body" body=""
	I1210 06:32:47.877689  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:47.877999  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:48.377617  401365 type.go:168] "Request Body" body=""
	I1210 06:32:48.377714  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:48.378121  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:48.877709  401365 type.go:168] "Request Body" body=""
	I1210 06:32:48.877795  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:48.878141  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:49.377627  401365 type.go:168] "Request Body" body=""
	I1210 06:32:49.377713  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:49.378005  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:49.878006  401365 type.go:168] "Request Body" body=""
	I1210 06:32:49.878085  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:49.878433  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:49.878488  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:50.378322  401365 type.go:168] "Request Body" body=""
	I1210 06:32:50.378398  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:50.378718  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:50.878347  401365 type.go:168] "Request Body" body=""
	I1210 06:32:50.878420  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:50.878687  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:51.378558  401365 type.go:168] "Request Body" body=""
	I1210 06:32:51.378630  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:51.378973  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:51.877668  401365 type.go:168] "Request Body" body=""
	I1210 06:32:51.877742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:51.878061  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:52.377607  401365 type.go:168] "Request Body" body=""
	I1210 06:32:52.377682  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:52.377965  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:52.378014  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:52.877660  401365 type.go:168] "Request Body" body=""
	I1210 06:32:52.877736  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:52.878070  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:53.377675  401365 type.go:168] "Request Body" body=""
	I1210 06:32:53.377760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:53.378128  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:53.878388  401365 type.go:168] "Request Body" body=""
	I1210 06:32:53.878462  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:53.878725  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:54.378466  401365 type.go:168] "Request Body" body=""
	I1210 06:32:54.378536  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:54.378857  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:54.378913  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:54.877680  401365 type.go:168] "Request Body" body=""
	I1210 06:32:54.877756  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:54.878119  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:55.378458  401365 type.go:168] "Request Body" body=""
	I1210 06:32:55.378526  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:55.378782  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:55.878544  401365 type.go:168] "Request Body" body=""
	I1210 06:32:55.878626  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:55.878951  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:56.377665  401365 type.go:168] "Request Body" body=""
	I1210 06:32:56.377741  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:56.378096  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:56.878361  401365 type.go:168] "Request Body" body=""
	I1210 06:32:56.878436  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:56.878736  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:56.878785  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:57.377545  401365 type.go:168] "Request Body" body=""
	I1210 06:32:57.377621  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:57.377956  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:57.877652  401365 type.go:168] "Request Body" body=""
	I1210 06:32:57.877743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:57.878070  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:58.377628  401365 type.go:168] "Request Body" body=""
	I1210 06:32:58.377706  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:58.378022  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:58.877657  401365 type.go:168] "Request Body" body=""
	I1210 06:32:58.877735  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:58.878058  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:59.377658  401365 type.go:168] "Request Body" body=""
	I1210 06:32:59.377748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:59.378092  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:59.378152  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:59.877990  401365 type.go:168] "Request Body" body=""
	I1210 06:32:59.878106  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:59.878540  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:00.378642  401365 type.go:168] "Request Body" body=""
	I1210 06:33:00.378734  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:00.379157  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:00.877676  401365 type.go:168] "Request Body" body=""
	I1210 06:33:00.877756  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:00.878108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:01.377615  401365 type.go:168] "Request Body" body=""
	I1210 06:33:01.377687  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:01.377982  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:01.877579  401365 type.go:168] "Request Body" body=""
	I1210 06:33:01.877659  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:01.877980  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:01.878035  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:02.377679  401365 type.go:168] "Request Body" body=""
	I1210 06:33:02.377769  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:02.378112  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:02.877622  401365 type.go:168] "Request Body" body=""
	I1210 06:33:02.877696  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:02.878019  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:03.378424  401365 type.go:168] "Request Body" body=""
	I1210 06:33:03.378503  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.378851  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:03.877594  401365 type.go:168] "Request Body" body=""
	I1210 06:33:03.877673  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.878031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:03.878095  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:04.377623  401365 type.go:168] "Request Body" body=""
	I1210 06:33:04.377695  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.378016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:04.878008  401365 type.go:168] "Request Body" body=""
	I1210 06:33:04.878082  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.878402  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.378189  401365 type.go:168] "Request Body" body=""
	I1210 06:33:05.378264  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.378599  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.878376  401365 type.go:168] "Request Body" body=""
	I1210 06:33:05.878455  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.878734  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:05.878779  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:06.378572  401365 type.go:168] "Request Body" body=""
	I1210 06:33:06.378660  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.379002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:06.877676  401365 type.go:168] "Request Body" body=""
	I1210 06:33:06.877754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.878110  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.378442  401365 type.go:168] "Request Body" body=""
	I1210 06:33:07.378521  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.378800  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.877549  401365 type.go:168] "Request Body" body=""
	I1210 06:33:07.877629  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.878000  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:08.377709  401365 type.go:168] "Request Body" body=""
	I1210 06:33:08.377785  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.378149  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:08.378206  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:08.877866  401365 type.go:168] "Request Body" body=""
	I1210 06:33:08.877938  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.878266  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.377997  401365 type.go:168] "Request Body" body=""
	I1210 06:33:09.378074  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.378430  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.878278  401365 type.go:168] "Request Body" body=""
	I1210 06:33:09.878362  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.878709  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:10.378535  401365 type.go:168] "Request Body" body=""
	I1210 06:33:10.378614  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.378892  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:10.378949  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:10.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:10.877696  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.878045  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:11.377636  401365 type.go:168] "Request Body" body=""
	I1210 06:33:11.377715  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.378069  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:11.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:33:11.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.878741  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:12.378537  401365 type.go:168] "Request Body" body=""
	I1210 06:33:12.378621  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.378959  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:12.379018  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:12.877685  401365 type.go:168] "Request Body" body=""
	I1210 06:33:12.877767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.878108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:13.377595  401365 type.go:168] "Request Body" body=""
	I1210 06:33:13.377667  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.377991  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:13.877696  401365 type.go:168] "Request Body" body=""
	I1210 06:33:13.877788  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.878233  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.377670  401365 type.go:168] "Request Body" body=""
	I1210 06:33:14.377745  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.378062  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.878087  401365 type.go:168] "Request Body" body=""
	I1210 06:33:14.878167  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.878437  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:14.878481  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:15.378338  401365 type.go:168] "Request Body" body=""
	I1210 06:33:15.378427  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.378799  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:15.877556  401365 type.go:168] "Request Body" body=""
	I1210 06:33:15.877630  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.878034  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:16.378366  401365 type.go:168] "Request Body" body=""
	I1210 06:33:16.378435  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.378773  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:16.878569  401365 type.go:168] "Request Body" body=""
	I1210 06:33:16.878643  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.879012  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:16.879074  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:17.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:33:17.377747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.378122  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:17.877820  401365 type.go:168] "Request Body" body=""
	I1210 06:33:17.877897  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.878175  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:18.377654  401365 type.go:168] "Request Body" body=""
	I1210 06:33:18.377727  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:18.378073  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:18.877668  401365 type.go:168] "Request Body" body=""
	I1210 06:33:18.877754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:18.878137  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:19.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:33:19.377682  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:19.377977  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:19.378029  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:19.877848  401365 type.go:168] "Request Body" body=""
	I1210 06:33:19.877930  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:19.878248  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:20.378064  401365 type.go:168] "Request Body" body=""
	I1210 06:33:20.378150  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:20.378561  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:20.878476  401365 type.go:168] "Request Body" body=""
	I1210 06:33:20.878552  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:20.878835  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:21.377583  401365 type.go:168] "Request Body" body=""
	I1210 06:33:21.377658  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:21.378029  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:21.378094  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:21.877680  401365 type.go:168] "Request Body" body=""
	I1210 06:33:21.877755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:21.878122  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:22.378420  401365 type.go:168] "Request Body" body=""
	I1210 06:33:22.378487  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:22.378808  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:22.877547  401365 type.go:168] "Request Body" body=""
	I1210 06:33:22.877625  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:22.877980  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:23.377731  401365 type.go:168] "Request Body" body=""
	I1210 06:33:23.377812  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:23.378164  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:23.378221  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:23.877756  401365 type.go:168] "Request Body" body=""
	I1210 06:33:23.877825  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:23.878080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:24.377759  401365 type.go:168] "Request Body" body=""
	I1210 06:33:24.377846  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:24.378207  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:24.878036  401365 type.go:168] "Request Body" body=""
	I1210 06:33:24.878119  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:24.878474  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:25.378280  401365 type.go:168] "Request Body" body=""
	I1210 06:33:25.378375  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:25.378683  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:25.378744  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:25.878089  401365 type.go:168] "Request Body" body=""
	I1210 06:33:25.878190  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:25.878571  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:26.378247  401365 type.go:168] "Request Body" body=""
	I1210 06:33:26.378325  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:26.378653  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:26.878389  401365 type.go:168] "Request Body" body=""
	I1210 06:33:26.878457  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:26.878720  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:27.378526  401365 type.go:168] "Request Body" body=""
	I1210 06:33:27.378607  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:27.378943  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:27.379002  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:27.877690  401365 type.go:168] "Request Body" body=""
	I1210 06:33:27.877775  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:27.878121  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:28.377561  401365 type.go:168] "Request Body" body=""
	I1210 06:33:28.377635  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:28.377959  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:28.877670  401365 type.go:168] "Request Body" body=""
	I1210 06:33:28.877750  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:28.878089  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:29.378437  401365 type.go:168] "Request Body" body=""
	I1210 06:33:29.378518  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:29.378867  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:29.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:29.877685  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:29.877995  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:29.878058  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:30.377631  401365 type.go:168] "Request Body" body=""
	I1210 06:33:30.377707  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:30.378042  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:30.877750  401365 type.go:168] "Request Body" body=""
	I1210 06:33:30.877827  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:30.878132  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:31.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:33:31.377692  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:31.377951  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:31.877635  401365 type.go:168] "Request Body" body=""
	I1210 06:33:31.877717  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:31.878049  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:31.878116  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:32.377678  401365 type.go:168] "Request Body" body=""
	I1210 06:33:32.377760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:32.378103  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:32.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:32.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:32.877995  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:33.377756  401365 type.go:168] "Request Body" body=""
	I1210 06:33:33.377840  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:33.378198  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:33.877915  401365 type.go:168] "Request Body" body=""
	I1210 06:33:33.877993  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:33.878332  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:33.878392  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:34.377635  401365 type.go:168] "Request Body" body=""
	I1210 06:33:34.377727  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:34.378085  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:34.878096  401365 type.go:168] "Request Body" body=""
	I1210 06:33:34.878177  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:34.878550  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:35.378207  401365 type.go:168] "Request Body" body=""
	I1210 06:33:35.378280  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:35.378622  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:35.878407  401365 type.go:168] "Request Body" body=""
	I1210 06:33:35.878481  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:35.878777  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:35.878833  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:36.378544  401365 type.go:168] "Request Body" body=""
	I1210 06:33:36.378618  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:36.378979  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:36.877667  401365 type.go:168] "Request Body" body=""
	I1210 06:33:36.877742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:36.878064  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:37.377603  401365 type.go:168] "Request Body" body=""
	I1210 06:33:37.377674  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:37.377946  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:37.877690  401365 type.go:168] "Request Body" body=""
	I1210 06:33:37.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:37.878181  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:38.377888  401365 type.go:168] "Request Body" body=""
	I1210 06:33:38.377973  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:38.378298  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:38.378347  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:38.877651  401365 type.go:168] "Request Body" body=""
	I1210 06:33:38.877756  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:38.878041  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:39.377680  401365 type.go:168] "Request Body" body=""
	I1210 06:33:39.377755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:39.378094  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:39.877930  401365 type.go:168] "Request Body" body=""
	I1210 06:33:39.878008  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:39.878344  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:40.378300  401365 type.go:168] "Request Body" body=""
	I1210 06:33:40.378366  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:40.378615  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:40.378657  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:40.878469  401365 type.go:168] "Request Body" body=""
	I1210 06:33:40.878546  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:40.878897  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:41.378609  401365 type.go:168] "Request Body" body=""
	I1210 06:33:41.378684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:41.379020  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:41.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:41.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:41.878041  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:42.377673  401365 type.go:168] "Request Body" body=""
	I1210 06:33:42.377747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:42.378116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:42.877854  401365 type.go:168] "Request Body" body=""
	I1210 06:33:42.877940  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:42.878285  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:42.878351  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:43.377678  401365 type.go:168] "Request Body" body=""
	I1210 06:33:43.377746  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.378043  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:43.877664  401365 type.go:168] "Request Body" body=""
	I1210 06:33:43.877741  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.878068  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:44.377646  401365 type.go:168] "Request Body" body=""
	I1210 06:33:44.377729  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.378109  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:44.877931  401365 type.go:168] "Request Body" body=""
	I1210 06:33:44.878000  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.878273  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:45.377683  401365 type.go:168] "Request Body" body=""
	I1210 06:33:45.377768  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.378162  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:45.378230  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:45.877646  401365 type.go:168] "Request Body" body=""
	I1210 06:33:45.877726  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.878079  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.378365  401365 type.go:168] "Request Body" body=""
	I1210 06:33:46.378443  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.378778  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.878592  401365 type.go:168] "Request Body" body=""
	I1210 06:33:46.878667  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.879016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.377612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:47.377693  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.378037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.878404  401365 type.go:168] "Request Body" body=""
	I1210 06:33:47.878481  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.878791  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:47.878833  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:48.378603  401365 type.go:168] "Request Body" body=""
	I1210 06:33:48.378679  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.379038  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:48.877634  401365 type.go:168] "Request Body" body=""
	I1210 06:33:48.877710  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.878058  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.377585  401365 type.go:168] "Request Body" body=""
	I1210 06:33:49.377661  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.377929  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.877952  401365 type.go:168] "Request Body" body=""
	I1210 06:33:49.878030  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.878370  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:50.378433  401365 type.go:168] "Request Body" body=""
	I1210 06:33:50.378512  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.378851  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:50.378908  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:50.878409  401365 type.go:168] "Request Body" body=""
	I1210 06:33:50.878484  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.878745  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.378528  401365 type.go:168] "Request Body" body=""
	I1210 06:33:51.378608  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.378930  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.877680  401365 type.go:168] "Request Body" body=""
	I1210 06:33:51.877772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.878121  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:33:52.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.378042  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.877736  401365 type.go:168] "Request Body" body=""
	I1210 06:33:52.877859  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.878200  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:52.878263  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:53.377750  401365 type.go:168] "Request Body" body=""
	I1210 06:33:53.377829  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.378164  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:53.878375  401365 type.go:168] "Request Body" body=""
	I1210 06:33:53.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.878711  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.378552  401365 type.go:168] "Request Body" body=""
	I1210 06:33:54.378627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.378978  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.877937  401365 type.go:168] "Request Body" body=""
	I1210 06:33:54.878016  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.878372  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:54.878426  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:55.377557  401365 type.go:168] "Request Body" body=""
	I1210 06:33:55.377627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.377890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:55.877581  401365 type.go:168] "Request Body" body=""
	I1210 06:33:55.877661  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.878044  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:33:56.377687  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.378031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.878382  401365 type.go:168] "Request Body" body=""
	I1210 06:33:56.878463  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.878747  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:56.878792  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:57.378563  401365 type.go:168] "Request Body" body=""
	I1210 06:33:57.378655  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.379048  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:57.878429  401365 type.go:168] "Request Body" body=""
	I1210 06:33:57.878505  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.878838  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.378383  401365 type.go:168] "Request Body" body=""
	I1210 06:33:58.378457  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.378729  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.878537  401365 type.go:168] "Request Body" body=""
	I1210 06:33:58.878615  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.878961  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:58.879020  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:59.377698  401365 type.go:168] "Request Body" body=""
	I1210 06:33:59.377777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.378091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:59.877943  401365 type.go:168] "Request Body" body=""
	I1210 06:33:59.878015  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.878285  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.388459  401365 type.go:168] "Request Body" body=""
	I1210 06:34:00.388551  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.388936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.877633  401365 type.go:168] "Request Body" body=""
	I1210 06:34:00.877711  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.878072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:34:01.377692  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.377964  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:01.378006  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:01.877703  401365 type.go:168] "Request Body" body=""
	I1210 06:34:01.877777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.878116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.377805  401365 type.go:168] "Request Body" body=""
	I1210 06:34:02.377886  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.378243  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.877793  401365 type.go:168] "Request Body" body=""
	I1210 06:34:02.877861  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.878116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:03.377724  401365 type.go:168] "Request Body" body=""
	I1210 06:34:03.377819  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.378176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:03.378248  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:03.877926  401365 type.go:168] "Request Body" body=""
	I1210 06:34:03.877998  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.878340  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.378166  401365 type.go:168] "Request Body" body=""
	I1210 06:34:04.378243  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.378539  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.878398  401365 type.go:168] "Request Body" body=""
	I1210 06:34:04.878479  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.878809  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:05.378551  401365 type.go:168] "Request Body" body=""
	I1210 06:34:05.378630  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.379127  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:05.379181  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:05.877595  401365 type.go:168] "Request Body" body=""
	I1210 06:34:05.877669  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.877928  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.377667  401365 type.go:168] "Request Body" body=""
	I1210 06:34:06.377742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.378094  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.877685  401365 type.go:168] "Request Body" body=""
	I1210 06:34:06.877764  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.878112  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.378383  401365 type.go:168] "Request Body" body=""
	I1210 06:34:07.378451  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.378722  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.878478  401365 type.go:168] "Request Body" body=""
	I1210 06:34:07.878553  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.878918  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:07.878972  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:08.378592  401365 type.go:168] "Request Body" body=""
	I1210 06:34:08.378675  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.379031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:08.877609  401365 type.go:168] "Request Body" body=""
	I1210 06:34:08.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.877968  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.377659  401365 type.go:168] "Request Body" body=""
	I1210 06:34:09.377734  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.378072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.877922  401365 type.go:168] "Request Body" body=""
	I1210 06:34:09.878005  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.878441  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:10.378514  401365 type.go:168] "Request Body" body=""
	I1210 06:34:10.378590  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.378890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:10.378934  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:10.877619  401365 type.go:168] "Request Body" body=""
	I1210 06:34:10.877709  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.878026  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:11.377616  401365 type.go:168] "Request Body" body=""
	I1210 06:34:11.377679  401365 node_ready.go:38] duration metric: took 6m0.000247895s for node "functional-253997" to be "Ready" ...
	I1210 06:34:11.380832  401365 out.go:203] 
	W1210 06:34:11.383623  401365 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:34:11.383641  401365 out.go:285] * 
	W1210 06:34:11.385783  401365 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:34:11.388549  401365 out.go:203] 
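
For reference, the retry loop above polls the node object roughly every 500ms until a 6-minute budget expires, and every attempt fails with "connection refused" because nothing is listening on 192.168.49.2:8441. Below is a minimal Go sketch of the same wait pattern, using plain net/http against the URL from the log. This is an illustration only, not minikube's actual node_ready implementation: minikube goes through client-go and verifies the cluster CA rather than skipping TLS verification.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 6-minute budget, matching "wait 6m0s for node" in the log above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// The test cluster's apiserver serves a cert signed by the cluster CA;
	// skipping verification here is a shortcut for the sketch only.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	const url = "https://192.168.49.2:8441/api/v1/nodes/functional-253997"

	ticker := time.NewTicker(500 * time.Millisecond) // ~2 polls per second, as seen above
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			fmt.Println("WaitNodeCondition: context deadline exceeded")
			return
		case <-ticker.C:
			resp, err := client.Get(url)
			if err != nil {
				continue // e.g. "dial tcp 192.168.49.2:8441: connect: connection refused"
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("node object fetched; next step would be to inspect the Ready condition")
				return
			}
		}
	}
}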
	
	
	==> CRI-O <==
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.146353278Z" level=info msg="Using the internal default seccomp profile"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.146361803Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.146367686Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.146373898Z" level=info msg="RDT not available in the host system"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.146390497Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.147142292Z" level=info msg="Conmon does support the --sync option"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.147171528Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.147189308Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.147877119Z" level=info msg="Conmon does support the --sync option"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.147897649Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.148036112Z" level=info msg="Updated default CNI network name to "
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.148588463Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.14893442Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.148991167Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198308631Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198345202Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198393637Z" level=info msg="Create NRI interface"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198494881Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198502668Z" level=info msg="runtime interface created"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198513819Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198519891Z" level=info msg="runtime interface starting up..."
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198525897Z" level=info msg="starting plugins..."
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198538911Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198604963Z" level=info msg="No systemd watchdog enabled"
	Dec 10 06:28:08 functional-253997 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:34:13.352082    9223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:13.352650    9223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:13.354566    9223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:13.355079    9223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:13.356602    9223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 06:15] overlayfs: idmapped layers are currently not supported
	[Dec10 06:16] overlayfs: idmapped layers are currently not supported
	[Dec10 06:19] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:34:13 up  3:16,  0 user,  load average: 0.15, 0.26, 0.81
	Linux functional-253997 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:34:11 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:11 functional-253997 kubelet[9112]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:11 functional-253997 kubelet[9112]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:11 functional-253997 kubelet[9112]: E1210 06:34:11.174427    9112 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:34:11 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:34:11 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:34:11 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 10 06:34:11 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:11 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:11 functional-253997 kubelet[9118]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:11 functional-253997 kubelet[9118]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:11 functional-253997 kubelet[9118]: E1210 06:34:11.940494    9118 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:34:11 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:34:11 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:34:12 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 10 06:34:12 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:12 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:12 functional-253997 kubelet[9139]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:12 functional-253997 kubelet[9139]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:12 functional-253997 kubelet[9139]: E1210 06:34:12.693869    9139 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:34:12 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:34:12 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:34:13 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 10 06:34:13 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:13 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
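Diagnosis (a sketch): the kubelet journal above shows why SoftStart never converges. Kubelet's configuration rejects hosts that present a cgroup v1 hierarchy ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts it endlessly (counter 1139 to 1141 in roughly two seconds) and no control-plane container ever starts. Assuming shell access to the Jenkins host and the profile name used above, the cgroup mode and the restart loop can be checked directly:

	stat -fc %T /sys/fs/cgroup/    # tmpfs => cgroup v1 (the failing case here), cgroup2fs => cgroup v2
	out/minikube-linux-arm64 -p functional-253997 ssh -- systemctl show kubelet --property=NRestarts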
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 2 (403.114858ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (369.83s)
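That crash loop also explains the empty "container status" table and the connection-refused errors above: nothing ever binds port 8441 inside the node. A quick probe (a sketch, assuming the container is still running) should keep returning the same refusal until kubelet stays up:

	out/minikube-linux-arm64 -p functional-253997 ssh -- curl -sk https://localhost:8441/healthz
	out/minikube-linux-arm64 -p functional-253997 ssh -- sudo crictl ps -a    # expected to list no control-plane containers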

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-253997 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-253997 get po -A: exit status 1 (66.225207ms)

** stderr ** 
	E1210 06:34:14.587893  405406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:34:14.589447  405406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:34:14.590938  405406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:34:14.592399  405406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:34:14.593880  405406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-253997 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1210 06:34:14.587893  405406 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1210 06:34:14.589447  405406 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1210 06:34:14.590938  405406 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1210 06:34:14.592399  405406 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1210 06:34:14.593880  405406 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nThe connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-253997 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-253997 get po -A"
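Separately, the "Last Start" log below records two 404s for the v1.35.0-rc.1 preload tarball, so this start falls back to the per-image cache (the cache.go lines that follow the warnings). The missing artifact is easy to confirm with a HEAD request; the URL is copied verbatim from the warning (a sketch):

	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 | head -1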
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-253997
helpers_test.go:244: (dbg) docker inspect functional-253997:

-- stdout --
	[
	    {
	        "Id": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	        "Created": "2025-12-10T06:19:33.832297734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:33.914699563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hosts",
	        "LogPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7-json.log",
	        "Name": "/functional-253997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-253997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-253997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	                "LowerDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-253997",
	                "Source": "/var/lib/docker/volumes/functional-253997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253997",
	                "name.minikube.sigs.k8s.io": "functional-253997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484c42edbd556e17e2c5a836571d3275bc9cbd157b70c100e08a5b73322a62e6",
	            "SandboxKey": "/var/run/docker/netns/484c42edbd55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ba:52:c5:6e:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fc5aadc020d5981cfdab05726de722ae81d653cbc30b66014568069657370fe7",
	                    "EndpointID": "9ecd4882ea5c9fe5581b68ad15a0a2f450e34777aee426fcc158008c81076e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253997",
	                        "256e059cdf1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
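The inspect output above shows the apiserver port is published (8441/tcp on 127.0.0.1:33162), so the refusals come from inside the node rather than from Docker's port mapping. The Go template minikube itself uses further down for the SSH port extracts any of these mappings; for example (a sketch with the apiserver port substituted):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-253997    # prints 33162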
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997: exit status 2 (330.563993ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-253997 logs -n 25: (1.064804713s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-013831 ssh sudo cat /usr/share/ca-certificates/364265.pem                                                                            │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                        │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /etc/ssl/certs/3642652.pem                                                                                       │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /usr/share/ca-certificates/3642652.pem                                                                           │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /etc/test/nested/copy/364265/hosts                                                                               │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                        │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ cp             │ functional-013831 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                              │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh -n functional-013831 sudo cat /home/docker/cp-test.txt                                                                    │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ cp             │ functional-013831 cp functional-013831:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1440438441/001/cp-test.txt                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh -n functional-013831 sudo cat /home/docker/cp-test.txt                                                                    │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ cp             │ functional-013831 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                       │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format short --alsologtostderr                                                                                     │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format yaml --alsologtostderr                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh -n functional-013831 sudo cat /tmp/does/not/exist/cp-test.txt                                                             │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ ssh            │ functional-013831 ssh pgrep buildkitd                                                                                                           │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ image          │ functional-013831 image ls --format json --alsologtostderr                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image build -t localhost/my-image:functional-013831 testdata/build --alsologtostderr                                          │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format table --alsologtostderr                                                                                     │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls                                                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ delete         │ -p functional-013831                                                                                                                            │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start          │ -p functional-253997 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ start          │ -p functional-253997 --alsologtostderr -v=8                                                                                                     │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:28 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:28:04
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:28:04.696682  401365 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:28:04.696859  401365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:28:04.696892  401365 out.go:374] Setting ErrFile to fd 2...
	I1210 06:28:04.696914  401365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:28:04.697215  401365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:28:04.697662  401365 out.go:368] Setting JSON to false
	I1210 06:28:04.698567  401365 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11437,"bootTime":1765336648,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:28:04.698673  401365 start.go:143] virtualization:  
	I1210 06:28:04.702443  401365 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:28:04.705481  401365 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:28:04.705615  401365 notify.go:221] Checking for updates...
	I1210 06:28:04.711086  401365 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:28:04.713917  401365 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:04.716867  401365 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:28:04.719925  401365 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:28:04.722835  401365 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:28:04.726336  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:04.726469  401365 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:28:04.754166  401365 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:28:04.754279  401365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:28:04.810645  401365 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:28:04.801435563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:28:04.810756  401365 docker.go:319] overlay module found
	I1210 06:28:04.813864  401365 out.go:179] * Using the docker driver based on existing profile
	I1210 06:28:04.816769  401365 start.go:309] selected driver: docker
	I1210 06:28:04.816791  401365 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:04.816907  401365 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:28:04.817028  401365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:28:04.870143  401365 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:28:04.860525891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:28:04.870593  401365 cni.go:84] Creating CNI manager for ""
	I1210 06:28:04.870644  401365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:28:04.870692  401365 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:04.873854  401365 out.go:179] * Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	I1210 06:28:04.876935  401365 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:28:04.879860  401365 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:28:04.882747  401365 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:28:04.882931  401365 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:28:04.906679  401365 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:28:04.906698  401365 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:28:04.939349  401365 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 06:28:05.106989  401365 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1210 06:28:05.107216  401365 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/config.json ...
	I1210 06:28:05.107505  401365 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:28:05.107566  401365 start.go:360] acquireMachinesLock for functional-253997: {Name:mkd4a204596bf14d7530a6c5c103756527acdf26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.107643  401365 start.go:364] duration metric: took 39.278µs to acquireMachinesLock for "functional-253997"
	I1210 06:28:05.107681  401365 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:28:05.107701  401365 fix.go:54] fixHost starting: 
	I1210 06:28:05.107821  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.108032  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:05.134635  401365 fix.go:112] recreateIfNeeded on functional-253997: state=Running err=<nil>
	W1210 06:28:05.134664  401365 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:28:05.138161  401365 out.go:252] * Updating the running docker "functional-253997" container ...
	I1210 06:28:05.138204  401365 machine.go:94] provisionDockerMachine start ...
	I1210 06:28:05.138290  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.156912  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.157271  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.157282  401365 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:28:05.272681  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.312543  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:28:05.312568  401365 ubuntu.go:182] provisioning hostname "functional-253997"
	I1210 06:28:05.312643  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.337102  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.337416  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.337433  401365 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-253997 && echo "functional-253997" | sudo tee /etc/hostname
	I1210 06:28:05.435781  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.503700  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:28:05.503808  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.525010  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.525371  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.525395  401365 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-253997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-253997/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-253997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:28:05.596990  401365 cache.go:107] acquiring lock: {Name:mk9268471d41d450748d4b0133d1a043378776f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597093  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:28:05.597107  401365 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 135.879µs
	I1210 06:28:05.597123  401365 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597148  401365 cache.go:107] acquiring lock: {Name:mk8250dc655f821bf2674ed77a7683798ede4f4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597196  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:28:05.597205  401365 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 71.098µs
	I1210 06:28:05.597212  401365 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597224  401365 cache.go:107] acquiring lock: {Name:mk1c0334e51772e4f6b3429cc05b91ec19d54b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597256  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:28:05.597264  401365 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 41.773µs
	I1210 06:28:05.597271  401365 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597286  401365 cache.go:107] acquiring lock: {Name:mk41aaba55a3296a937cf380d44dc9920df69546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597313  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:28:05.597325  401365 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 45.342µs
	I1210 06:28:05.597331  401365 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597347  401365 cache.go:107] acquiring lock: {Name:mkc2831727d74e5fadb1c00bd89f0236b385200e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597380  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:28:05.597390  401365 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 49.009µs
	I1210 06:28:05.597395  401365 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:28:05.597404  401365 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597432  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:28:05.597441  401365 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 38.597µs
	I1210 06:28:05.597447  401365 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:28:05.597457  401365 cache.go:107] acquiring lock: {Name:mk7e052900e41419dc78d3311f27db508a8f64b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597487  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:28:05.597494  401365 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.163µs
	I1210 06:28:05.597499  401365 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:28:05.597517  401365 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597571  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:28:05.597584  401365 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.023µs
	I1210 06:28:05.597591  401365 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:28:05.597598  401365 cache.go:87] Successfully saved all images to host disk.
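A compilable sketch of the lock, stat, skip pattern the cache.go lines above record (saveToTar, locks, and the path mangling are illustrative names, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    	"sync"
    	"time"
    )

    var locks sync.Map // per-destination locks, as in "acquiring lock: {Name:mk...}"

    // saveToTar mirrors the flow above: take the lock for this tarball,
    // stat it, and treat an existing file as a completed save.
    func saveToTar(image, cacheDir string) error {
    	dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
    	mu, _ := locks.LoadOrStore(dst, &sync.Mutex{})
    	mu.(*sync.Mutex).Lock()
    	defer mu.(*sync.Mutex).Unlock()

    	start := time.Now()
    	if _, err := os.Stat(dst); err == nil {
    		fmt.Printf("cache image %q -> %q took %s\n", image, dst, time.Since(start))
    		return nil // already on disk, nothing to do
    	}
    	return fmt.Errorf("pull-and-save elided in this sketch")
    }

    func main() {
    	_ = saveToTar("registry.k8s.io/pause:3.10.1", os.TempDir())
    }
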
	I1210 06:28:05.681682  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:28:05.681708  401365 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 06:28:05.681741  401365 ubuntu.go:190] setting up certificates
	I1210 06:28:05.681752  401365 provision.go:84] configureAuth start
	I1210 06:28:05.681819  401365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:28:05.699808  401365 provision.go:143] copyHostCerts
	I1210 06:28:05.699863  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:28:05.699905  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 06:28:05.699919  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:28:05.699992  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 06:28:05.700081  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:28:05.700104  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 06:28:05.700113  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:28:05.700142  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 06:28:05.700188  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:28:05.700207  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 06:28:05.700218  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:28:05.700242  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 06:28:05.700300  401365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.functional-253997 san=[127.0.0.1 192.168.49.2 functional-253997 localhost minikube]
	I1210 06:28:05.936274  401365 provision.go:177] copyRemoteCerts
	I1210 06:28:05.936350  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:28:05.936418  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.954560  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.065031  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 06:28:06.065092  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:28:06.082556  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 06:28:06.082620  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:28:06.101057  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 06:28:06.101135  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:28:06.119676  401365 provision.go:87] duration metric: took 437.892883ms to configureAuth
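provision.go:117 above generates server.pem for the SANs listed in san=[...]. A self-signed approximation with Go's crypto/x509 (the real provisioner signs with the minikube CA; only the SAN handling is reproduced here):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-253997"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log line: IPs and DNS names go in separate fields.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		DNSNames:    []string{"functional-253997", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
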
	I1210 06:28:06.119777  401365 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:28:06.119980  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:06.120085  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.137920  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:06.138235  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:06.138256  401365 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:28:06.452845  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:28:06.452929  401365 machine.go:97] duration metric: took 1.314715304s to provisionDockerMachine
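Everything from "About to run SSH command" to the echoed output above is one shell pipeline executed over the forwarded port. A stripped-down equivalent with golang.org/x/crypto/ssh, using the address, user, and key path the log reports:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33159", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	// The same step the log shows: write the env file, then restart crio.
    	cmd := `sudo mkdir -p /etc/sysconfig && printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
    	out, err := sess.CombinedOutput(cmd)
    	fmt.Println(string(out), err)
    }
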
	I1210 06:28:06.452956  401365 start.go:293] postStartSetup for "functional-253997" (driver="docker")
	I1210 06:28:06.452990  401365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:28:06.453063  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:28:06.453144  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.470784  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.577269  401365 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:28:06.580692  401365 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 06:28:06.580715  401365 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 06:28:06.580720  401365 command_runner.go:130] > VERSION_ID="12"
	I1210 06:28:06.580725  401365 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 06:28:06.580730  401365 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 06:28:06.580768  401365 command_runner.go:130] > ID=debian
	I1210 06:28:06.580780  401365 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 06:28:06.580785  401365 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 06:28:06.580791  401365 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 06:28:06.580887  401365 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:28:06.580933  401365 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:28:06.580952  401365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 06:28:06.581012  401365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 06:28:06.581098  401365 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 06:28:06.581111  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> /etc/ssl/certs/3642652.pem
	I1210 06:28:06.581203  401365 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> hosts in /etc/test/nested/copy/364265
	I1210 06:28:06.581211  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> /etc/test/nested/copy/364265/hosts
	I1210 06:28:06.581307  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364265
	I1210 06:28:06.588834  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:28:06.607350  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts --> /etc/test/nested/copy/364265/hosts (40 bytes)
	I1210 06:28:06.625111  401365 start.go:296] duration metric: took 172.118023ms for postStartSetup
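The filesync scan above maps every file under .minikube/files/<path> to /<path> on the node, which is how 3642652.pem lands in /etc/ssl/certs and the nested hosts file under /etc/test/nested/copy/364265. A sketch of that mapping with a hypothetical local root:

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    func main() {
    	root := filepath.Join(".minikube", "files") // hypothetical local root
    	_ = filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		// The root of the local tree corresponds to "/" on the node.
    		rel, _ := filepath.Rel(root, p)
    		fmt.Printf("%s -> /%s\n", p, filepath.ToSlash(rel))
    		return nil
    	})
    }
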
	I1210 06:28:06.625251  401365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:28:06.625310  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.643314  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.746089  401365 command_runner.go:130] > 11%
	I1210 06:28:06.746641  401365 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:28:06.751190  401365 command_runner.go:130] > 174G
	I1210 06:28:06.751596  401365 fix.go:56] duration metric: took 1.643890859s for fixHost
	I1210 06:28:06.751620  401365 start.go:83] releasing machines lock for "functional-253997", held for 1.643948944s
	I1210 06:28:06.751695  401365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:28:06.769599  401365 ssh_runner.go:195] Run: cat /version.json
	I1210 06:28:06.769653  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.769923  401365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:28:06.769973  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.794205  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.801527  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.995023  401365 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 06:28:06.995129  401365 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 06:28:06.995269  401365 ssh_runner.go:195] Run: systemctl --version
	I1210 06:28:07.001581  401365 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 06:28:07.001629  401365 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 06:28:07.002099  401365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:28:07.048284  401365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 06:28:07.052994  401365 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 06:28:07.053661  401365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:28:07.053769  401365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:28:07.062754  401365 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:28:07.062818  401365 start.go:496] detecting cgroup driver to use...
	I1210 06:28:07.062869  401365 detect.go:187] detected "cgroupfs" cgroup driver on host os
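detect.go:187 reports the host cgroup driver without showing the check. One common heuristic for the underlying cgroup v1 vs v2 question is the filesystem magic at /sys/fs/cgroup; a sketch of that heuristic, not necessarily minikube's exact logic:

    package main

    import (
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	var st unix.Statfs_t
    	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
    		panic(err)
    	}
    	if st.Type == unix.CGROUP2_SUPER_MAGIC {
    		fmt.Println("cgroup v2 (unified hierarchy)")
    	} else {
    		fmt.Println("cgroup v1") // this host was detected as "cgroupfs"
    	}
    }
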
	I1210 06:28:07.062946  401365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:28:07.079107  401365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:28:07.094803  401365 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:28:07.094958  401365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:28:07.114470  401365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:28:07.128193  401365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:28:07.258424  401365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:28:07.374265  401365 docker.go:234] disabling docker service ...
	I1210 06:28:07.374339  401365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:28:07.389285  401365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:28:07.403201  401365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:28:07.521904  401365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:28:07.641023  401365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:28:07.653771  401365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:28:07.666535  401365 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1210 06:28:07.667719  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
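The ?checksum=file:<url> query string on the kubeadm URL above is hashicorp/go-getter syntax: the client downloads the .sha256 file and verifies the binary against it before handing it over. A minimal use of the same mechanism (go-getter v1; the destination filename is arbitrary):

    package main

    import getter "github.com/hashicorp/go-getter"

    func main() {
    	src := "https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm" +
    		"?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256"
    	// GetFile fails if the download does not match the published checksum.
    	if err := getter.GetFile("kubeadm", src); err != nil {
    		panic(err)
    	}
    }
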
	I1210 06:28:07.817082  401365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:28:07.817158  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.826426  401365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:28:07.826509  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.835611  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.844530  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.853511  401365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:28:07.861378  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.870726  401365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.879012  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.888039  401365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:28:07.894740  401365 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 06:28:07.895767  401365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:28:07.903878  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:08.028500  401365 ssh_runner.go:195] Run: sudo systemctl restart crio
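Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like the fragment below. The section headers are an assumption; the cgroup_manager, conmon_cgroup, and default_sysctls values are confirmed by the crio config dump later in this log:

    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed, not captured verbatim)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
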
	I1210 06:28:08.203883  401365 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:28:08.204004  401365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:28:08.207826  401365 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1210 06:28:08.207850  401365 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 06:28:08.207858  401365 command_runner.go:130] > Device: 0,72	Inode: 1753        Links: 1
	I1210 06:28:08.207864  401365 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:28:08.207869  401365 command_runner.go:130] > Access: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207875  401365 command_runner.go:130] > Modify: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207879  401365 command_runner.go:130] > Change: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207883  401365 command_runner.go:130] >  Birth: -
	I1210 06:28:08.207920  401365 start.go:564] Will wait 60s for crictl version
	I1210 06:28:08.207972  401365 ssh_runner.go:195] Run: which crictl
	I1210 06:28:08.211603  401365 command_runner.go:130] > /usr/local/bin/crictl
	I1210 06:28:08.211673  401365 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:28:08.233344  401365 command_runner.go:130] > Version:  0.1.0
	I1210 06:28:08.233366  401365 command_runner.go:130] > RuntimeName:  cri-o
	I1210 06:28:08.233371  401365 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1210 06:28:08.233486  401365 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 06:28:08.235784  401365 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:28:08.235868  401365 ssh_runner.go:195] Run: crio --version
	I1210 06:28:08.263554  401365 command_runner.go:130] > crio version 1.34.3
	I1210 06:28:08.263582  401365 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 06:28:08.263590  401365 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 06:28:08.263598  401365 command_runner.go:130] >    GitTreeState:   dirty
	I1210 06:28:08.263603  401365 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 06:28:08.263609  401365 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 06:28:08.263614  401365 command_runner.go:130] >    Compiler:       gc
	I1210 06:28:08.263618  401365 command_runner.go:130] >    Platform:       linux/arm64
	I1210 06:28:08.263625  401365 command_runner.go:130] >    Linkmode:       static
	I1210 06:28:08.263631  401365 command_runner.go:130] >    BuildTags:
	I1210 06:28:08.263635  401365 command_runner.go:130] >      static
	I1210 06:28:08.263641  401365 command_runner.go:130] >      netgo
	I1210 06:28:08.263644  401365 command_runner.go:130] >      osusergo
	I1210 06:28:08.263649  401365 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 06:28:08.263658  401365 command_runner.go:130] >      seccomp
	I1210 06:28:08.263662  401365 command_runner.go:130] >      apparmor
	I1210 06:28:08.263665  401365 command_runner.go:130] >      selinux
	I1210 06:28:08.263673  401365 command_runner.go:130] >    LDFlags:          unknown
	I1210 06:28:08.263678  401365 command_runner.go:130] >    SeccompEnabled:   true
	I1210 06:28:08.263686  401365 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 06:28:08.265277  401365 ssh_runner.go:195] Run: crio --version
	I1210 06:28:08.292854  401365 command_runner.go:130] > crio version 1.34.3
	I1210 06:28:08.292877  401365 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 06:28:08.292884  401365 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 06:28:08.292894  401365 command_runner.go:130] >    GitTreeState:   dirty
	I1210 06:28:08.292899  401365 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 06:28:08.292903  401365 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 06:28:08.292909  401365 command_runner.go:130] >    Compiler:       gc
	I1210 06:28:08.292914  401365 command_runner.go:130] >    Platform:       linux/arm64
	I1210 06:28:08.292918  401365 command_runner.go:130] >    Linkmode:       static
	I1210 06:28:08.292921  401365 command_runner.go:130] >    BuildTags:
	I1210 06:28:08.292925  401365 command_runner.go:130] >      static
	I1210 06:28:08.292929  401365 command_runner.go:130] >      netgo
	I1210 06:28:08.292932  401365 command_runner.go:130] >      osusergo
	I1210 06:28:08.292936  401365 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 06:28:08.292939  401365 command_runner.go:130] >      seccomp
	I1210 06:28:08.292943  401365 command_runner.go:130] >      apparmor
	I1210 06:28:08.292947  401365 command_runner.go:130] >      selinux
	I1210 06:28:08.292951  401365 command_runner.go:130] >    LDFlags:          unknown
	I1210 06:28:08.292955  401365 command_runner.go:130] >    SeccompEnabled:   true
	I1210 06:28:08.292959  401365 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 06:28:08.297960  401365 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:28:08.300955  401365 cli_runner.go:164] Run: docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:28:08.316701  401365 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:28:08.320890  401365 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 06:28:08.321107  401365 kubeadm.go:884] updating cluster {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:28:08.321383  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.467539  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.630219  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.778675  401365 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:28:08.778770  401365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:28:08.809702  401365 command_runner.go:130] > {
	I1210 06:28:08.809721  401365 command_runner.go:130] >   "images":  [
	I1210 06:28:08.809725  401365 command_runner.go:130] >     {
	I1210 06:28:08.809734  401365 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1210 06:28:08.809739  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809744  401365 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:28:08.809748  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809753  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809762  401365 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1210 06:28:08.809765  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809770  401365 command_runner.go:130] >       "size":  "29035622",
	I1210 06:28:08.809784  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809789  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809792  401365 command_runner.go:130] >     },
	I1210 06:28:08.809795  401365 command_runner.go:130] >     {
	I1210 06:28:08.809802  401365 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:28:08.809806  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809812  401365 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:28:08.809815  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809819  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809827  401365 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1210 06:28:08.809830  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809834  401365 command_runner.go:130] >       "size":  "74488375",
	I1210 06:28:08.809839  401365 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:28:08.809843  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809846  401365 command_runner.go:130] >     },
	I1210 06:28:08.809850  401365 command_runner.go:130] >     {
	I1210 06:28:08.809856  401365 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1210 06:28:08.809860  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809865  401365 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1210 06:28:08.809868  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809872  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809882  401365 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c711b5adb8fed0c7e2a62a6c327fd0f8486f90b93c1ebd0ba1e790b930373aae"
	I1210 06:28:08.809885  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809889  401365 command_runner.go:130] >       "size":  "60849030",
	I1210 06:28:08.809893  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.809897  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.809900  401365 command_runner.go:130] >       },
	I1210 06:28:08.809904  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809908  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809911  401365 command_runner.go:130] >     },
	I1210 06:28:08.809915  401365 command_runner.go:130] >     {
	I1210 06:28:08.809921  401365 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1210 06:28:08.809925  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809934  401365 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1210 06:28:08.809938  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809941  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809949  401365 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:112cff968330968b8d8cc75dd9f232b5b048067a2151629e56b80ff7af621b72"
	I1210 06:28:08.809954  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809958  401365 command_runner.go:130] >       "size":  "85012778",
	I1210 06:28:08.809961  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.809965  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.809968  401365 command_runner.go:130] >       },
	I1210 06:28:08.809973  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809977  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809980  401365 command_runner.go:130] >     },
	I1210 06:28:08.809983  401365 command_runner.go:130] >     {
	I1210 06:28:08.809989  401365 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1210 06:28:08.809994  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809999  401365 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1210 06:28:08.810002  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810006  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810014  401365 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:89a8e28214a1c4b4631281e37feaf54299e7510d43c5445d4adc88575390f71e"
	I1210 06:28:08.810017  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810021  401365 command_runner.go:130] >       "size":  "72167568",
	I1210 06:28:08.810030  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810035  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.810038  401365 command_runner.go:130] >       },
	I1210 06:28:08.810042  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810046  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810049  401365 command_runner.go:130] >     },
	I1210 06:28:08.810052  401365 command_runner.go:130] >     {
	I1210 06:28:08.810058  401365 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1210 06:28:08.810062  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810068  401365 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1210 06:28:08.810072  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810076  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810086  401365 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:01f8409e5c04cbb256f29dc118ff184a24b9cbb97c7f6d3bd462333b366566ca"
	I1210 06:28:08.810089  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810093  401365 command_runner.go:130] >       "size":  "74105636",
	I1210 06:28:08.810097  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810101  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810104  401365 command_runner.go:130] >     },
	I1210 06:28:08.810107  401365 command_runner.go:130] >     {
	I1210 06:28:08.810114  401365 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1210 06:28:08.810117  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810127  401365 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1210 06:28:08.810131  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810134  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810144  401365 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9a2a4c91f5fdaf1a3c5e839f425cc7461b9e24d5f0816c207638df25ade3eda9"
	I1210 06:28:08.810147  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810151  401365 command_runner.go:130] >       "size":  "49819792",
	I1210 06:28:08.810154  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810158  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.810160  401365 command_runner.go:130] >       },
	I1210 06:28:08.810165  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810169  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810172  401365 command_runner.go:130] >     },
	I1210 06:28:08.810175  401365 command_runner.go:130] >     {
	I1210 06:28:08.810181  401365 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:28:08.810185  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810189  401365 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:28:08.810192  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810196  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810203  401365 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1210 06:28:08.810206  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810210  401365 command_runner.go:130] >       "size":  "517328",
	I1210 06:28:08.810213  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810217  401365 command_runner.go:130] >         "value":  "65535"
	I1210 06:28:08.810220  401365 command_runner.go:130] >       },
	I1210 06:28:08.810228  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810232  401365 command_runner.go:130] >       "pinned":  true
	I1210 06:28:08.810234  401365 command_runner.go:130] >     }
	I1210 06:28:08.810237  401365 command_runner.go:130] >   ]
	I1210 06:28:08.810240  401365 command_runner.go:130] > }
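This image list is what lets crio.go conclude below that everything is preloaded. Decoding it only needs a struct mirroring the fields shown above (the optional uid object is omitted for brevity):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // Field names taken directly from the JSON in the log output.
    type imageList struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Size        string   `json:"size"`
    		Username    string   `json:"username"`
    		Pinned      bool     `json:"pinned"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var l imageList
    	if err := json.Unmarshal(out, &l); err != nil {
    		panic(err)
    	}
    	for _, img := range l.Images {
    		fmt.Println(img.RepoTags, img.Size)
    	}
    }
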
	I1210 06:28:08.812152  401365 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:28:08.812177  401365 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:28:08.812185  401365 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1210 06:28:08.812284  401365 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-253997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
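The empty ExecStart= line in the generated unit above is the standard systemd drop-in idiom: for Exec* directives, an empty assignment clears the command list inherited from the base kubelet.service so the next line fully replaces it rather than appending. In miniature:

    [Service]
    # An empty assignment resets the command list from the base unit:
    ExecStart=
    ExecStart=/new/path/kubelet --with-replacement-flags
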
	I1210 06:28:08.812367  401365 ssh_runner.go:195] Run: crio config
	I1210 06:28:08.860605  401365 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1210 06:28:08.860628  401365 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1210 06:28:08.860635  401365 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1210 06:28:08.860638  401365 command_runner.go:130] > #
	I1210 06:28:08.860654  401365 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1210 06:28:08.860661  401365 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1210 06:28:08.860668  401365 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1210 06:28:08.860677  401365 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1210 06:28:08.860680  401365 command_runner.go:130] > # reload'.
	I1210 06:28:08.860687  401365 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1210 06:28:08.860694  401365 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1210 06:28:08.860700  401365 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1210 06:28:08.860706  401365 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1210 06:28:08.860709  401365 command_runner.go:130] > [crio]
	I1210 06:28:08.860716  401365 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1210 06:28:08.860721  401365 command_runner.go:130] > # containers images, in this directory.
	I1210 06:28:08.860730  401365 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1210 06:28:08.860737  401365 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1210 06:28:08.860742  401365 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1210 06:28:08.860760  401365 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1210 06:28:08.860811  401365 command_runner.go:130] > # imagestore = ""
	I1210 06:28:08.860819  401365 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1210 06:28:08.860826  401365 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1210 06:28:08.860837  401365 command_runner.go:130] > # storage_driver = "overlay"
	I1210 06:28:08.860843  401365 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1210 06:28:08.860850  401365 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1210 06:28:08.860853  401365 command_runner.go:130] > # storage_option = [
	I1210 06:28:08.860857  401365 command_runner.go:130] > # ]
	I1210 06:28:08.860864  401365 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1210 06:28:08.860870  401365 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1210 06:28:08.860874  401365 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1210 06:28:08.860880  401365 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1210 06:28:08.860886  401365 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1210 06:28:08.860890  401365 command_runner.go:130] > # always happen on a node reboot
	I1210 06:28:08.860894  401365 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1210 06:28:08.860905  401365 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1210 06:28:08.860911  401365 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1210 06:28:08.860918  401365 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1210 06:28:08.860922  401365 command_runner.go:130] > # version_file_persist = ""
	I1210 06:28:08.860930  401365 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1210 06:28:08.860938  401365 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1210 06:28:08.860941  401365 command_runner.go:130] > # internal_wipe = true
	I1210 06:28:08.860950  401365 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1210 06:28:08.860955  401365 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1210 06:28:08.860959  401365 command_runner.go:130] > # internal_repair = true
	I1210 06:28:08.860964  401365 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1210 06:28:08.860971  401365 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1210 06:28:08.860976  401365 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1210 06:28:08.860981  401365 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1210 06:28:08.860987  401365 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1210 06:28:08.860991  401365 command_runner.go:130] > [crio.api]
	I1210 06:28:08.860997  401365 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1210 06:28:08.861001  401365 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1210 06:28:08.861006  401365 command_runner.go:130] > # IP address on which the stream server will listen.
	I1210 06:28:08.861010  401365 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1210 06:28:08.861017  401365 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1210 06:28:08.861026  401365 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1210 06:28:08.861030  401365 command_runner.go:130] > # stream_port = "0"
	I1210 06:28:08.861035  401365 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1210 06:28:08.861040  401365 command_runner.go:130] > # stream_enable_tls = false
	I1210 06:28:08.861046  401365 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1210 06:28:08.861050  401365 command_runner.go:130] > # stream_idle_timeout = ""
	I1210 06:28:08.861056  401365 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1210 06:28:08.861062  401365 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1210 06:28:08.861066  401365 command_runner.go:130] > # stream_tls_cert = ""
	I1210 06:28:08.861072  401365 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1210 06:28:08.861077  401365 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1210 06:28:08.861081  401365 command_runner.go:130] > # stream_tls_key = ""
	I1210 06:28:08.861087  401365 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1210 06:28:08.861093  401365 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1210 06:28:08.861097  401365 command_runner.go:130] > # automatically pick up the changes.
	I1210 06:28:08.861446  401365 command_runner.go:130] > # stream_tls_ca = ""
	I1210 06:28:08.861478  401365 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 06:28:08.861569  401365 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1210 06:28:08.861581  401365 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 06:28:08.861586  401365 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1210 06:28:08.861593  401365 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1210 06:28:08.861599  401365 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1210 06:28:08.861602  401365 command_runner.go:130] > [crio.runtime]
	I1210 06:28:08.861609  401365 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1210 06:28:08.861614  401365 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1210 06:28:08.861628  401365 command_runner.go:130] > # "nofile=1024:2048"
	I1210 06:28:08.861634  401365 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1210 06:28:08.861638  401365 command_runner.go:130] > # default_ulimits = [
	I1210 06:28:08.861653  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861660  401365 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1210 06:28:08.861663  401365 command_runner.go:130] > # no_pivot = false
	I1210 06:28:08.861669  401365 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1210 06:28:08.861675  401365 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1210 06:28:08.861681  401365 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1210 06:28:08.861687  401365 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1210 06:28:08.861696  401365 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1210 06:28:08.861703  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 06:28:08.861707  401365 command_runner.go:130] > # conmon = ""
	I1210 06:28:08.861711  401365 command_runner.go:130] > # Cgroup setting for conmon
	I1210 06:28:08.861718  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1210 06:28:08.861722  401365 command_runner.go:130] > conmon_cgroup = "pod"
	I1210 06:28:08.861728  401365 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1210 06:28:08.861733  401365 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1210 06:28:08.861740  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 06:28:08.861744  401365 command_runner.go:130] > # conmon_env = [
	I1210 06:28:08.861747  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861753  401365 command_runner.go:130] > # Additional environment variables to set for all the
	I1210 06:28:08.861758  401365 command_runner.go:130] > # containers. These are overridden if set in the
	I1210 06:28:08.861764  401365 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1210 06:28:08.861768  401365 command_runner.go:130] > # default_env = [
	I1210 06:28:08.861771  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861787  401365 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1210 06:28:08.861795  401365 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1210 06:28:08.861799  401365 command_runner.go:130] > # selinux = false
	I1210 06:28:08.861809  401365 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1210 06:28:08.861817  401365 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1210 06:28:08.861823  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862101  401365 command_runner.go:130] > # seccomp_profile = ""
	I1210 06:28:08.862113  401365 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1210 06:28:08.862119  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862201  401365 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1210 06:28:08.862211  401365 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1210 06:28:08.862225  401365 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1210 06:28:08.862232  401365 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1210 06:28:08.862239  401365 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1210 06:28:08.862244  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862248  401365 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1210 06:28:08.862254  401365 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1210 06:28:08.862259  401365 command_runner.go:130] > # the cgroup blockio controller.
	I1210 06:28:08.862263  401365 command_runner.go:130] > # blockio_config_file = ""
	I1210 06:28:08.862273  401365 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1210 06:28:08.862283  401365 command_runner.go:130] > # blockio parameters.
	I1210 06:28:08.862294  401365 command_runner.go:130] > # blockio_reload = false
	I1210 06:28:08.862301  401365 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1210 06:28:08.862304  401365 command_runner.go:130] > # irqbalance daemon.
	I1210 06:28:08.862310  401365 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1210 06:28:08.862316  401365 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1210 06:28:08.862323  401365 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1210 06:28:08.862330  401365 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1210 06:28:08.862336  401365 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1210 06:28:08.862342  401365 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1210 06:28:08.862347  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862351  401365 command_runner.go:130] > # rdt_config_file = ""
	I1210 06:28:08.862356  401365 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1210 06:28:08.862384  401365 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1210 06:28:08.862391  401365 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1210 06:28:08.862666  401365 command_runner.go:130] > # separate_pull_cgroup = ""
	I1210 06:28:08.862678  401365 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1210 06:28:08.862685  401365 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1210 06:28:08.862689  401365 command_runner.go:130] > # will be added.
	I1210 06:28:08.862693  401365 command_runner.go:130] > # default_capabilities = [
	I1210 06:28:08.862777  401365 command_runner.go:130] > # 	"CHOWN",
	I1210 06:28:08.862786  401365 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1210 06:28:08.862797  401365 command_runner.go:130] > # 	"FSETID",
	I1210 06:28:08.862802  401365 command_runner.go:130] > # 	"FOWNER",
	I1210 06:28:08.862806  401365 command_runner.go:130] > # 	"SETGID",
	I1210 06:28:08.862809  401365 command_runner.go:130] > # 	"SETUID",
	I1210 06:28:08.862838  401365 command_runner.go:130] > # 	"SETPCAP",
	I1210 06:28:08.862844  401365 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1210 06:28:08.862847  401365 command_runner.go:130] > # 	"KILL",
	I1210 06:28:08.862850  401365 command_runner.go:130] > # ]
	I1210 06:28:08.862858  401365 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1210 06:28:08.862865  401365 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1210 06:28:08.863095  401365 command_runner.go:130] > # add_inheritable_capabilities = false
	I1210 06:28:08.863106  401365 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1210 06:28:08.863112  401365 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 06:28:08.863116  401365 command_runner.go:130] > default_sysctls = [
	I1210 06:28:08.863203  401365 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1210 06:28:08.863243  401365 command_runner.go:130] > ]
	I1210 06:28:08.863252  401365 command_runner.go:130] > # List of devices on the host that a
	I1210 06:28:08.863259  401365 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1210 06:28:08.863263  401365 command_runner.go:130] > # allowed_devices = [
	I1210 06:28:08.863314  401365 command_runner.go:130] > # 	"/dev/fuse",
	I1210 06:28:08.863326  401365 command_runner.go:130] > # 	"/dev/net/tun",
	I1210 06:28:08.863333  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863338  401365 command_runner.go:130] > # List of additional devices, specified as
	I1210 06:28:08.863345  401365 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1210 06:28:08.863351  401365 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1210 06:28:08.863357  401365 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 06:28:08.863361  401365 command_runner.go:130] > # additional_devices = [
	I1210 06:28:08.863363  401365 command_runner.go:130] > # ]
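As a sketch of the format described above, an uncommented entry reusing the example device mapping from the comment (the values are illustrative, not from this run) would read:

	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]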
	I1210 06:28:08.863368  401365 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1210 06:28:08.863372  401365 command_runner.go:130] > # cdi_spec_dirs = [
	I1210 06:28:08.863376  401365 command_runner.go:130] > # 	"/etc/cdi",
	I1210 06:28:08.863379  401365 command_runner.go:130] > # 	"/var/run/cdi",
	I1210 06:28:08.863382  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863388  401365 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1210 06:28:08.863394  401365 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1210 06:28:08.863398  401365 command_runner.go:130] > # Defaults to false.
	I1210 06:28:08.863403  401365 command_runner.go:130] > # device_ownership_from_security_context = false
	I1210 06:28:08.863410  401365 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1210 06:28:08.863415  401365 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1210 06:28:08.863419  401365 command_runner.go:130] > # hooks_dir = [
	I1210 06:28:08.863604  401365 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1210 06:28:08.863612  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863618  401365 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1210 06:28:08.863625  401365 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1210 06:28:08.863630  401365 command_runner.go:130] > # its default mounts from the following two files:
	I1210 06:28:08.863633  401365 command_runner.go:130] > #
	I1210 06:28:08.863640  401365 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1210 06:28:08.863646  401365 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1210 06:28:08.863652  401365 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1210 06:28:08.863655  401365 command_runner.go:130] > #
	I1210 06:28:08.863661  401365 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1210 06:28:08.863676  401365 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1210 06:28:08.863683  401365 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1210 06:28:08.863687  401365 command_runner.go:130] > #      only add mounts it finds in this file.
	I1210 06:28:08.863690  401365 command_runner.go:130] > #
	I1210 06:28:08.863719  401365 command_runner.go:130] > # default_mounts_file = ""
	I1210 06:28:08.863725  401365 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1210 06:28:08.863732  401365 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1210 06:28:08.863736  401365 command_runner.go:130] > # pids_limit = -1
	I1210 06:28:08.863742  401365 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1210 06:28:08.863748  401365 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1210 06:28:08.863761  401365 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1210 06:28:08.863771  401365 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1210 06:28:08.863775  401365 command_runner.go:130] > # log_size_max = -1
	I1210 06:28:08.863782  401365 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1210 06:28:08.863786  401365 command_runner.go:130] > # log_to_journald = false
	I1210 06:28:08.863792  401365 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1210 06:28:08.863974  401365 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1210 06:28:08.863984  401365 command_runner.go:130] > # Path to directory for container attach sockets.
	I1210 06:28:08.863990  401365 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1210 06:28:08.863996  401365 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1210 06:28:08.864082  401365 command_runner.go:130] > # bind_mount_prefix = ""
	I1210 06:28:08.864098  401365 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1210 06:28:08.864139  401365 command_runner.go:130] > # read_only = false
	I1210 06:28:08.864149  401365 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1210 06:28:08.864156  401365 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1210 06:28:08.864159  401365 command_runner.go:130] > # live configuration reload.
	I1210 06:28:08.864163  401365 command_runner.go:130] > # log_level = "info"
	I1210 06:28:08.864169  401365 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1210 06:28:08.864174  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.864178  401365 command_runner.go:130] > # log_filter = ""
	I1210 06:28:08.864183  401365 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1210 06:28:08.864190  401365 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1210 06:28:08.864193  401365 command_runner.go:130] > # separated by comma.
	I1210 06:28:08.864208  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864211  401365 command_runner.go:130] > # uid_mappings = ""
	I1210 06:28:08.864218  401365 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1210 06:28:08.864224  401365 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1210 06:28:08.864228  401365 command_runner.go:130] > # separated by comma.
	I1210 06:28:08.864236  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864440  401365 command_runner.go:130] > # gid_mappings = ""
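The containerUID:HostUID:Size ranges above map container IDs starting at the first value onto host IDs starting at the second, for the given count. A minimal sketch with hypothetical ranges (both options are deprecated, per the comments above):

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"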
	I1210 06:28:08.864451  401365 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1210 06:28:08.864458  401365 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 06:28:08.864465  401365 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 06:28:08.864473  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864477  401365 command_runner.go:130] > # minimum_mappable_uid = -1
	I1210 06:28:08.864483  401365 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1210 06:28:08.864493  401365 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 06:28:08.864501  401365 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 06:28:08.864514  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864541  401365 command_runner.go:130] > # minimum_mappable_gid = -1
	I1210 06:28:08.864548  401365 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1210 06:28:08.864555  401365 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1210 06:28:08.864560  401365 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1210 06:28:08.864572  401365 command_runner.go:130] > # ctr_stop_timeout = 30
	I1210 06:28:08.864578  401365 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1210 06:28:08.864588  401365 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1210 06:28:08.864593  401365 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1210 06:28:08.864598  401365 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1210 06:28:08.864602  401365 command_runner.go:130] > # drop_infra_ctr = true
	I1210 06:28:08.864608  401365 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1210 06:28:08.864613  401365 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1210 06:28:08.864621  401365 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1210 06:28:08.864625  401365 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1210 06:28:08.864632  401365 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1210 06:28:08.864638  401365 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1210 06:28:08.864644  401365 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1210 06:28:08.864649  401365 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1210 06:28:08.864653  401365 command_runner.go:130] > # shared_cpuset = ""
	I1210 06:28:08.864659  401365 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1210 06:28:08.864664  401365 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1210 06:28:08.864668  401365 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1210 06:28:08.864675  401365 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1210 06:28:08.864858  401365 command_runner.go:130] > # pinns_path = ""
	I1210 06:28:08.864869  401365 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1210 06:28:08.864876  401365 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1210 06:28:08.864881  401365 command_runner.go:130] > # enable_criu_support = true
	I1210 06:28:08.864886  401365 command_runner.go:130] > # Enable/disable the generation of the container and
	I1210 06:28:08.864892  401365 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1210 06:28:08.864935  401365 command_runner.go:130] > # enable_pod_events = false
	I1210 06:28:08.864946  401365 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 06:28:08.864960  401365 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1210 06:28:08.865092  401365 command_runner.go:130] > # default_runtime = "crun"
	I1210 06:28:08.865104  401365 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1210 06:28:08.865112  401365 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1210 06:28:08.865122  401365 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1210 06:28:08.865127  401365 command_runner.go:130] > # creation as a file is not desired either.
	I1210 06:28:08.865136  401365 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1210 06:28:08.865141  401365 command_runner.go:130] > # the hostname is being managed dynamically.
	I1210 06:28:08.865146  401365 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1210 06:28:08.865148  401365 command_runner.go:130] > # ]
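Using the /etc/hostname example from the comment above, an uncommented entry would look like this sketch:

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]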
	I1210 06:28:08.865158  401365 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1210 06:28:08.865165  401365 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1210 06:28:08.865171  401365 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1210 06:28:08.865177  401365 command_runner.go:130] > # Each entry in the table should follow the format:
	I1210 06:28:08.865179  401365 command_runner.go:130] > #
	I1210 06:28:08.865200  401365 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1210 06:28:08.865207  401365 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1210 06:28:08.865210  401365 command_runner.go:130] > # runtime_type = "oci"
	I1210 06:28:08.865215  401365 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1210 06:28:08.865219  401365 command_runner.go:130] > # inherit_default_runtime = false
	I1210 06:28:08.865224  401365 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1210 06:28:08.865229  401365 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1210 06:28:08.865233  401365 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1210 06:28:08.865236  401365 command_runner.go:130] > # monitor_env = []
	I1210 06:28:08.865241  401365 command_runner.go:130] > # privileged_without_host_devices = false
	I1210 06:28:08.865245  401365 command_runner.go:130] > # allowed_annotations = []
	I1210 06:28:08.865250  401365 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1210 06:28:08.865253  401365 command_runner.go:130] > # no_sync_log = false
	I1210 06:28:08.865257  401365 command_runner.go:130] > # default_annotations = {}
	I1210 06:28:08.865261  401365 command_runner.go:130] > # stream_websockets = false
	I1210 06:28:08.865265  401365 command_runner.go:130] > # seccomp_profile = ""
	I1210 06:28:08.865296  401365 command_runner.go:130] > # Where:
	I1210 06:28:08.865301  401365 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1210 06:28:08.865308  401365 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1210 06:28:08.865314  401365 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1210 06:28:08.865320  401365 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1210 06:28:08.865323  401365 command_runner.go:130] > #   in $PATH.
	I1210 06:28:08.865330  401365 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1210 06:28:08.865334  401365 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1210 06:28:08.865341  401365 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1210 06:28:08.865344  401365 command_runner.go:130] > #   state.
	I1210 06:28:08.865352  401365 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1210 06:28:08.865360  401365 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1210 06:28:08.865368  401365 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1210 06:28:08.865376  401365 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1210 06:28:08.865381  401365 command_runner.go:130] > #   the values from the default runtime on load time.
	I1210 06:28:08.865387  401365 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1210 06:28:08.865392  401365 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1210 06:28:08.865399  401365 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1210 06:28:08.865406  401365 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1210 06:28:08.865411  401365 command_runner.go:130] > #   The currently recognized values are:
	I1210 06:28:08.865417  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1210 06:28:08.865425  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1210 06:28:08.865431  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1210 06:28:08.865437  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1210 06:28:08.865444  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1210 06:28:08.865451  401365 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1210 06:28:08.865458  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1210 06:28:08.865464  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1210 06:28:08.865470  401365 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1210 06:28:08.865492  401365 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1210 06:28:08.865501  401365 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1210 06:28:08.865507  401365 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1210 06:28:08.865513  401365 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1210 06:28:08.865519  401365 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1210 06:28:08.865525  401365 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1210 06:28:08.865533  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1210 06:28:08.865539  401365 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1210 06:28:08.865552  401365 command_runner.go:130] > #   deprecated option "conmon".
	I1210 06:28:08.865560  401365 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1210 06:28:08.865565  401365 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1210 06:28:08.865572  401365 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1210 06:28:08.865578  401365 command_runner.go:130] > #   should be moved to the container's cgroup.
	I1210 06:28:08.865587  401365 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1210 06:28:08.865592  401365 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1210 06:28:08.865599  401365 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1210 06:28:08.865607  401365 command_runner.go:130] > #   conmon-rs by using:
	I1210 06:28:08.865615  401365 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1210 06:28:08.865622  401365 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1210 06:28:08.865630  401365 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1210 06:28:08.865636  401365 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1210 06:28:08.865642  401365 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1210 06:28:08.865649  401365 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1210 06:28:08.865657  401365 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1210 06:28:08.865661  401365 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1210 06:28:08.865669  401365 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1210 06:28:08.865677  401365 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1210 06:28:08.865685  401365 command_runner.go:130] > #   when a machine crash happens.
	I1210 06:28:08.865693  401365 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1210 06:28:08.865700  401365 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1210 06:28:08.865708  401365 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1210 06:28:08.865713  401365 command_runner.go:130] > #   seccomp profile for the runtime.
	I1210 06:28:08.865719  401365 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1210 06:28:08.865744  401365 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1210 06:28:08.865747  401365 command_runner.go:130] > #
	I1210 06:28:08.865751  401365 command_runner.go:130] > # Using the seccomp notifier feature:
	I1210 06:28:08.865754  401365 command_runner.go:130] > #
	I1210 06:28:08.865762  401365 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1210 06:28:08.865768  401365 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1210 06:28:08.865771  401365 command_runner.go:130] > #
	I1210 06:28:08.865777  401365 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1210 06:28:08.865783  401365 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1210 06:28:08.865785  401365 command_runner.go:130] > #
	I1210 06:28:08.865793  401365 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1210 06:28:08.865797  401365 command_runner.go:130] > # feature.
	I1210 06:28:08.865800  401365 command_runner.go:130] > #
	I1210 06:28:08.865807  401365 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1210 06:28:08.865813  401365 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1210 06:28:08.865819  401365 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1210 06:28:08.865832  401365 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1210 06:28:08.865838  401365 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1210 06:28:08.865841  401365 command_runner.go:130] > #
	I1210 06:28:08.865847  401365 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1210 06:28:08.865853  401365 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1210 06:28:08.865856  401365 command_runner.go:130] > #
	I1210 06:28:08.865862  401365 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1210 06:28:08.865870  401365 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1210 06:28:08.865873  401365 command_runner.go:130] > #
	I1210 06:28:08.865880  401365 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1210 06:28:08.865885  401365 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1210 06:28:08.865889  401365 command_runner.go:130] > # limitation.
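To illustrate the notifier flow described above, a pod would carry the annotation together with a "Never" restart policy. A minimal YAML sketch, assuming hypothetical pod/container/image names (only the annotation key, the "stop" value, and the restartPolicy requirement come from the comments above):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-debug                                  # hypothetical
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"  # terminate workload after the 5s timeout
	spec:
	  restartPolicy: Never                                 # required, or the kubelet restarts the container
	  containers:
	  - name: app                                          # hypothetical
	    image: registry.example/app                        # hypothetical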
	I1210 06:28:08.865905  401365 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1210 06:28:08.866331  401365 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1210 06:28:08.866426  401365 command_runner.go:130] > runtime_type = ""
	I1210 06:28:08.866446  401365 command_runner.go:130] > runtime_root = "/run/crun"
	I1210 06:28:08.866464  401365 command_runner.go:130] > inherit_default_runtime = false
	I1210 06:28:08.866497  401365 command_runner.go:130] > runtime_config_path = ""
	I1210 06:28:08.866524  401365 command_runner.go:130] > container_min_memory = ""
	I1210 06:28:08.866577  401365 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 06:28:08.866606  401365 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 06:28:08.866632  401365 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 06:28:08.866654  401365 command_runner.go:130] > allowed_annotations = [
	I1210 06:28:08.866675  401365 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1210 06:28:08.866694  401365 command_runner.go:130] > ]
	I1210 06:28:08.866728  401365 command_runner.go:130] > privileged_without_host_devices = false
	I1210 06:28:08.866748  401365 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1210 06:28:08.866769  401365 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1210 06:28:08.866790  401365 command_runner.go:130] > runtime_type = ""
	I1210 06:28:08.866821  401365 command_runner.go:130] > runtime_root = "/run/runc"
	I1210 06:28:08.866840  401365 command_runner.go:130] > inherit_default_runtime = false
	I1210 06:28:08.866860  401365 command_runner.go:130] > runtime_config_path = ""
	I1210 06:28:08.866880  401365 command_runner.go:130] > container_min_memory = ""
	I1210 06:28:08.866908  401365 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 06:28:08.866932  401365 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 06:28:08.866953  401365 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 06:28:08.866974  401365 command_runner.go:130] > privileged_without_host_devices = false
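Following the table format documented above, an extra handler would sit alongside the crun and runc entries. A minimal TOML sketch for a hypothetical handler (the name and paths are assumptions, not part of this run):

	[crio.runtime.runtimes.myhandler]           # hypothetical handler name
	runtime_path = "/usr/local/bin/myruntime"   # hypothetical path; omit to resolve from $PATH
	runtime_type = "oci"                        # default when omitted
	runtime_root = "/run/myruntime"
	monitor_path = "/usr/libexec/crio/conmon"   # same monitor as the entries above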
	I1210 06:28:08.867007  401365 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1210 06:28:08.867043  401365 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1210 06:28:08.867068  401365 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1210 06:28:08.867104  401365 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1210 06:28:08.867134  401365 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1210 06:28:08.867162  401365 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1210 06:28:08.867185  401365 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1210 06:28:08.867213  401365 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1210 06:28:08.867246  401365 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1210 06:28:08.867272  401365 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1210 06:28:08.867293  401365 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1210 06:28:08.867324  401365 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1210 06:28:08.867347  401365 command_runner.go:130] > # Example:
	I1210 06:28:08.867368  401365 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1210 06:28:08.867390  401365 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1210 06:28:08.867422  401365 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1210 06:28:08.867444  401365 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1210 06:28:08.867461  401365 command_runner.go:130] > # cpuset = "0-1"
	I1210 06:28:08.867481  401365 command_runner.go:130] > # cpushares = "5"
	I1210 06:28:08.867501  401365 command_runner.go:130] > # cpuquota = "1000"
	I1210 06:28:08.867527  401365 command_runner.go:130] > # cpuperiod = "100000"
	I1210 06:28:08.867550  401365 command_runner.go:130] > # cpulimit = "35"
	I1210 06:28:08.867570  401365 command_runner.go:130] > # Where:
	I1210 06:28:08.867591  401365 command_runner.go:130] > # The workload name is workload-type.
	I1210 06:28:08.867625  401365 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1210 06:28:08.867647  401365 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1210 06:28:08.867667  401365 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1210 06:28:08.867691  401365 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1210 06:28:08.867724  401365 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
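On the pod side, opting into the example workload takes the activation annotation (key only) plus an optional per-container override using the $annotation_prefix.$resource/$ctrName form described above. A minimal YAML sketch, assuming a hypothetical container named "app":

	metadata:
	  annotations:
	    io.crio/workload: ""                        # activation; value is ignored
	    io.crio.workload-type.cpushares/app: "512"  # per-container override (hypothetical value)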
	I1210 06:28:08.867747  401365 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1210 06:28:08.867767  401365 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1210 06:28:08.867786  401365 command_runner.go:130] > # Default value is set to true
	I1210 06:28:08.867808  401365 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1210 06:28:08.867842  401365 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1210 06:28:08.867862  401365 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1210 06:28:08.867882  401365 command_runner.go:130] > # Default value is set to 'false'
	I1210 06:28:08.867915  401365 command_runner.go:130] > # disable_hostport_mapping = false
	I1210 06:28:08.867942  401365 command_runner.go:130] > # timezone: To set the timezone for a container in CRI-O.
	I1210 06:28:08.867964  401365 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1210 06:28:08.867982  401365 command_runner.go:130] > # timezone = ""
	I1210 06:28:08.868015  401365 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1210 06:28:08.868041  401365 command_runner.go:130] > #
	I1210 06:28:08.868060  401365 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1210 06:28:08.868081  401365 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1210 06:28:08.868110  401365 command_runner.go:130] > [crio.image]
	I1210 06:28:08.868133  401365 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1210 06:28:08.868150  401365 command_runner.go:130] > # default_transport = "docker://"
	I1210 06:28:08.868170  401365 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1210 06:28:08.868192  401365 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1210 06:28:08.868219  401365 command_runner.go:130] > # global_auth_file = ""
	I1210 06:28:08.868243  401365 command_runner.go:130] > # The image used to instantiate infra containers.
	I1210 06:28:08.868264  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.868284  401365 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1210 06:28:08.868317  401365 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1210 06:28:08.868338  401365 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1210 06:28:08.868357  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.868374  401365 command_runner.go:130] > # pause_image_auth_file = ""
	I1210 06:28:08.868396  401365 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1210 06:28:08.868423  401365 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1210 06:28:08.868450  401365 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1210 06:28:08.868474  401365 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1210 06:28:08.868753  401365 command_runner.go:130] > # pause_command = "/pause"
	I1210 06:28:08.868765  401365 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1210 06:28:08.868772  401365 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1210 06:28:08.868778  401365 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1210 06:28:08.868784  401365 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1210 06:28:08.868791  401365 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1210 06:28:08.868797  401365 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1210 06:28:08.868802  401365 command_runner.go:130] > # pinned_images = [
	I1210 06:28:08.868834  401365 command_runner.go:130] > # ]
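The three matching modes described above could look like the following hypothetical entries:

	pinned_images = [
		"registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
		"registry.k8s.io/kube-*",         # glob: wildcard only at the end
		"*pause*",                        # keyword: wildcards on both ends
	]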
	I1210 06:28:08.868841  401365 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1210 06:28:08.868848  401365 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1210 06:28:08.868855  401365 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1210 06:28:08.868864  401365 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1210 06:28:08.868877  401365 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1210 06:28:08.868892  401365 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1210 06:28:08.868897  401365 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1210 06:28:08.868904  401365 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1210 06:28:08.868911  401365 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1210 06:28:08.868917  401365 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1210 06:28:08.868924  401365 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1210 06:28:08.868928  401365 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1210 06:28:08.868935  401365 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1210 06:28:08.868941  401365 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1210 06:28:08.868945  401365 command_runner.go:130] > # changing them here.
	I1210 06:28:08.868950  401365 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1210 06:28:08.868954  401365 command_runner.go:130] > # insecure_registries = [
	I1210 06:28:08.868957  401365 command_runner.go:130] > # ]
	I1210 06:28:08.868964  401365 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1210 06:28:08.868968  401365 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1210 06:28:08.868972  401365 command_runner.go:130] > # image_volumes = "mkdir"
	I1210 06:28:08.868978  401365 command_runner.go:130] > # Temporary directory to use for storing big files
	I1210 06:28:08.868982  401365 command_runner.go:130] > # big_files_temporary_dir = ""
	I1210 06:28:08.868988  401365 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1210 06:28:08.868995  401365 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1210 06:28:08.868999  401365 command_runner.go:130] > # auto_reload_registries = false
	I1210 06:28:08.869006  401365 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1210 06:28:08.869014  401365 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1210 06:28:08.869022  401365 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1210 06:28:08.869027  401365 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1210 06:28:08.869031  401365 command_runner.go:130] > # The mode of short name resolution.
	I1210 06:28:08.869039  401365 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1210 06:28:08.869047  401365 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1210 06:28:08.869051  401365 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1210 06:28:08.869055  401365 command_runner.go:130] > # short_name_mode = "enforcing"
	I1210 06:28:08.869061  401365 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1210 06:28:08.869067  401365 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1210 06:28:08.869299  401365 command_runner.go:130] > # oci_artifact_mount_support = true
	I1210 06:28:08.869316  401365 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1210 06:28:08.869329  401365 command_runner.go:130] > # CNI plugins.
	I1210 06:28:08.869333  401365 command_runner.go:130] > [crio.network]
	I1210 06:28:08.869340  401365 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1210 06:28:08.869346  401365 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1210 06:28:08.869485  401365 command_runner.go:130] > # cni_default_network = ""
	I1210 06:28:08.869502  401365 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1210 06:28:08.869709  401365 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1210 06:28:08.869721  401365 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1210 06:28:08.869725  401365 command_runner.go:130] > # plugin_dirs = [
	I1210 06:28:08.869729  401365 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1210 06:28:08.869732  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869736  401365 command_runner.go:130] > # List of included pod metrics.
	I1210 06:28:08.869740  401365 command_runner.go:130] > # included_pod_metrics = [
	I1210 06:28:08.869743  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869749  401365 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1210 06:28:08.869752  401365 command_runner.go:130] > [crio.metrics]
	I1210 06:28:08.869757  401365 command_runner.go:130] > # Globally enable or disable metrics support.
	I1210 06:28:08.869763  401365 command_runner.go:130] > # enable_metrics = false
	I1210 06:28:08.869767  401365 command_runner.go:130] > # Specify enabled metrics collectors.
	I1210 06:28:08.869772  401365 command_runner.go:130] > # Per default all metrics are enabled.
	I1210 06:28:08.869778  401365 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1210 06:28:08.869785  401365 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1210 06:28:08.869791  401365 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1210 06:28:08.869796  401365 command_runner.go:130] > # metrics_collectors = [
	I1210 06:28:08.869800  401365 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1210 06:28:08.869805  401365 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1210 06:28:08.869809  401365 command_runner.go:130] > # 	"containers_oom_total",
	I1210 06:28:08.869813  401365 command_runner.go:130] > # 	"processes_defunct",
	I1210 06:28:08.869817  401365 command_runner.go:130] > # 	"operations_total",
	I1210 06:28:08.869821  401365 command_runner.go:130] > # 	"operations_latency_seconds",
	I1210 06:28:08.869826  401365 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1210 06:28:08.869830  401365 command_runner.go:130] > # 	"operations_errors_total",
	I1210 06:28:08.869834  401365 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1210 06:28:08.869839  401365 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1210 06:28:08.869843  401365 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1210 06:28:08.869851  401365 command_runner.go:130] > # 	"image_pulls_success_total",
	I1210 06:28:08.869855  401365 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1210 06:28:08.869860  401365 command_runner.go:130] > # 	"containers_oom_count_total",
	I1210 06:28:08.869865  401365 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1210 06:28:08.869873  401365 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1210 06:28:08.869878  401365 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1210 06:28:08.869881  401365 command_runner.go:130] > # ]
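Putting the options above together, enabling the endpoint with a subset of collectors might look like this sketch (the collector names are taken from the list above; host and port are the documented defaults):

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
	]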
	I1210 06:28:08.869887  401365 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1210 06:28:08.869891  401365 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1210 06:28:08.869896  401365 command_runner.go:130] > # The port on which the metrics server will listen.
	I1210 06:28:08.869901  401365 command_runner.go:130] > # metrics_port = 9090
	I1210 06:28:08.869906  401365 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1210 06:28:08.869910  401365 command_runner.go:130] > # metrics_socket = ""
	I1210 06:28:08.869915  401365 command_runner.go:130] > # The certificate for the secure metrics server.
	I1210 06:28:08.869921  401365 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1210 06:28:08.869928  401365 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1210 06:28:08.869934  401365 command_runner.go:130] > # certificate on any modification event.
	I1210 06:28:08.869938  401365 command_runner.go:130] > # metrics_cert = ""
	I1210 06:28:08.869943  401365 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1210 06:28:08.869948  401365 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1210 06:28:08.869963  401365 command_runner.go:130] > # metrics_key = ""
	I1210 06:28:08.869970  401365 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1210 06:28:08.869973  401365 command_runner.go:130] > [crio.tracing]
	I1210 06:28:08.869978  401365 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1210 06:28:08.869982  401365 command_runner.go:130] > # enable_tracing = false
	I1210 06:28:08.869987  401365 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1210 06:28:08.869992  401365 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1210 06:28:08.869999  401365 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1210 06:28:08.870003  401365 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1210 06:28:08.870007  401365 command_runner.go:130] > # CRI-O NRI configuration.
	I1210 06:28:08.870010  401365 command_runner.go:130] > [crio.nri]
	I1210 06:28:08.870014  401365 command_runner.go:130] > # Globally enable or disable NRI.
	I1210 06:28:08.870018  401365 command_runner.go:130] > # enable_nri = true
	I1210 06:28:08.870022  401365 command_runner.go:130] > # NRI socket to listen on.
	I1210 06:28:08.870026  401365 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1210 06:28:08.870031  401365 command_runner.go:130] > # NRI plugin directory to use.
	I1210 06:28:08.870035  401365 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1210 06:28:08.870044  401365 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1210 06:28:08.870049  401365 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1210 06:28:08.870054  401365 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1210 06:28:08.870120  401365 command_runner.go:130] > # nri_disable_connections = false
	I1210 06:28:08.870126  401365 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1210 06:28:08.870131  401365 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1210 06:28:08.870136  401365 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1210 06:28:08.870140  401365 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1210 06:28:08.870144  401365 command_runner.go:130] > # NRI default validator configuration.
	I1210 06:28:08.870151  401365 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1210 06:28:08.870158  401365 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1210 06:28:08.870166  401365 command_runner.go:130] > # can be restricted/rejected:
	I1210 06:28:08.870170  401365 command_runner.go:130] > # - OCI hook injection
	I1210 06:28:08.870176  401365 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1210 06:28:08.870182  401365 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1210 06:28:08.870187  401365 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1210 06:28:08.870192  401365 command_runner.go:130] > # - adjustment of linux namespaces
	I1210 06:28:08.870198  401365 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1210 06:28:08.870204  401365 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1210 06:28:08.870211  401365 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1210 06:28:08.870214  401365 command_runner.go:130] > #
	I1210 06:28:08.870219  401365 command_runner.go:130] > # [crio.nri.default_validator]
	I1210 06:28:08.870224  401365 command_runner.go:130] > # nri_enable_default_validator = false
	I1210 06:28:08.870229  401365 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1210 06:28:08.870235  401365 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1210 06:28:08.870240  401365 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1210 06:28:08.870245  401365 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1210 06:28:08.870249  401365 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1210 06:28:08.870254  401365 command_runner.go:130] > # nri_validator_required_plugins = [
	I1210 06:28:08.870256  401365 command_runner.go:130] > # ]
	I1210 06:28:08.870261  401365 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
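As a sketch, turning on the builtin validator with a single restriction, using only option names from the commented block above:

	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true   # reject NRI-requested OCI hook injection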
	I1210 06:28:08.870267  401365 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1210 06:28:08.870270  401365 command_runner.go:130] > [crio.stats]
	I1210 06:28:08.870279  401365 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1210 06:28:08.870285  401365 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1210 06:28:08.870289  401365 command_runner.go:130] > # stats_collection_period = 0
	I1210 06:28:08.870295  401365 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1210 06:28:08.870301  401365 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1210 06:28:08.870309  401365 command_runner.go:130] > # collection_period = 0
	I1210 06:28:08.872234  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838776003Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1210 06:28:08.872284  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838812886Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1210 06:28:08.872309  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838840094Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1210 06:28:08.872334  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839193559Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1210 06:28:08.872381  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839375723Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:08.872413  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839707715Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1210 06:28:08.872441  401365 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1210 06:28:08.872553  401365 cni.go:84] Creating CNI manager for ""
	I1210 06:28:08.872583  401365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:28:08.872624  401365 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:28:08.872677  401365 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-253997 NodeName:functional-253997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:28:08.872842  401365 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-253997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
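The kubeadm.yaml above is not hand-written: kubeadm.go renders it from the options struct logged at 06:28:08.872677. A minimal sketch of that render step in Go, assuming a cut-down template and illustrative field names rather than minikube's actual template variables:

    package main

    import (
    	"os"
    	"text/template"
    )

    // initCfg is a simplified stand-in for minikube's kubeadm template; only
    // the two fields that vary in the log above are parameterized.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `

    func main() {
    	t := template.Must(template.New("init").Parse(initCfg))
    	// Values mirror the log: NodeIP 192.168.49.2, APIServerPort 8441.
    	data := struct {
    		AdvertiseAddress string
    		APIServerPort    int
    	}{"192.168.49.2", 8441}
    	if err := t.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }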
	I1210 06:28:08.872963  401365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:28:08.882589  401365 command_runner.go:130] > kubeadm
	I1210 06:28:08.882664  401365 command_runner.go:130] > kubectl
	I1210 06:28:08.882683  401365 command_runner.go:130] > kubelet
	I1210 06:28:08.883772  401365 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:28:08.883860  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:28:08.894311  401365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:28:08.917477  401365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:28:08.933123  401365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1210 06:28:08.951215  401365 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:28:08.955022  401365 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 06:28:08.955137  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:09.068336  401365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:28:09.626369  401365 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997 for IP: 192.168.49.2
	I1210 06:28:09.626393  401365 certs.go:195] generating shared ca certs ...
	I1210 06:28:09.626411  401365 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:09.626560  401365 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 06:28:09.626610  401365 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 06:28:09.626622  401365 certs.go:257] generating profile certs ...
	I1210 06:28:09.626723  401365 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key
	I1210 06:28:09.626797  401365 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423
	I1210 06:28:09.626842  401365 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key
	I1210 06:28:09.626855  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 06:28:09.626868  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 06:28:09.626879  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 06:28:09.626895  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 06:28:09.626917  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 06:28:09.626934  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 06:28:09.626951  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 06:28:09.626967  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 06:28:09.627018  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 06:28:09.627054  401365 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 06:28:09.627067  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:28:09.627098  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:28:09.627129  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:28:09.627160  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 06:28:09.627208  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:28:09.627243  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.627257  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem -> /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.627269  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> /usr/share/ca-certificates/3642652.pem
	I1210 06:28:09.627907  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:28:09.646839  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:28:09.665451  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:28:09.684144  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:28:09.703168  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:28:09.722766  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:28:09.740755  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:28:09.758979  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:28:09.777915  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:28:09.796193  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 06:28:09.814097  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 06:28:09.831978  401365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:28:09.845391  401365 ssh_runner.go:195] Run: openssl version
	I1210 06:28:09.851779  401365 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 06:28:09.852274  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.860146  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:28:09.868064  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872198  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872310  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872381  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.915298  401365 command_runner.go:130] > b5213941
	I1210 06:28:09.915776  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:28:09.923881  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.931564  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 06:28:09.939347  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943515  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943602  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943706  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.984596  401365 command_runner.go:130] > 51391683
	I1210 06:28:09.985095  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:28:09.992884  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.000682  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 06:28:10.009973  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015475  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015546  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015611  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.058412  401365 command_runner.go:130] > 3ec20f2e
	I1210 06:28:10.059028  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
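The three hash-and-symlink passes above (minikubeCA.pem to b5213941.0, 364265.pem to 51391683.0, 3642652.pem to 3ec20f2e.0) exist because OpenSSL looks up CA certificates in /etc/ssl/certs by subject-hash filename. A sketch of one pass in Go, shelling out to openssl exactly as the log does; it assumes openssl is on PATH and the process can write to /etc/ssl/certs (root, in practice):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	// Equivalent of: openssl x509 -hash -noout -in <pem>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// Equivalent of: ln -fs <pem> <link>
    	os.Remove(link)
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    }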
	I1210 06:28:10.067481  401365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:28:10.072097  401365 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:28:10.072141  401365 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 06:28:10.072148  401365 command_runner.go:130] > Device: 259,1	Inode: 3906312     Links: 1
	I1210 06:28:10.072155  401365 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:28:10.072162  401365 command_runner.go:130] > Access: 2025-12-10 06:24:00.744386425 +0000
	I1210 06:28:10.072185  401365 command_runner.go:130] > Modify: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072211  401365 command_runner.go:130] > Change: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072217  401365 command_runner.go:130] >  Birth: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072295  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:28:10.114065  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.114701  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:28:10.156441  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.157041  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:28:10.198547  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.198997  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:28:10.239473  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.239921  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:28:10.280741  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.281284  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:28:10.322073  401365 command_runner.go:130] > Certificate will not expire
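Each "openssl x509 ... -checkend 86400" call above asks one question: does the certificate expire within the next 86400 seconds (24 hours)? A pure-Go equivalent with crypto/x509, using one of the cert paths from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Same check openssl performs for -checkend 86400.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }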
	I1210 06:28:10.322510  401365 kubeadm.go:401] StartCluster: {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:10.322592  401365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:28:10.322670  401365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:28:10.349813  401365 cri.go:89] found id: ""
	I1210 06:28:10.349915  401365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:28:10.357053  401365 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 06:28:10.357076  401365 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 06:28:10.357083  401365 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 06:28:10.358087  401365 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:28:10.358107  401365 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:28:10.358179  401365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:28:10.366355  401365 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:28:10.366773  401365 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-253997" does not appear in /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.366892  401365 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-362392/kubeconfig needs updating (will repair): [kubeconfig missing "functional-253997" cluster setting kubeconfig missing "functional-253997" context setting]
	I1210 06:28:10.367176  401365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.367620  401365 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.367775  401365 kapi.go:59] client config for functional-253997: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:28:10.368328  401365 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:28:10.368348  401365 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:28:10.368357  401365 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:28:10.368361  401365 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:28:10.368366  401365 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
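	The kapi.go dump above corresponds to what client-go builds from the kubeconfig that was just repaired. A minimal client-go sketch, assuming k8s.io/client-go is available and using the kubeconfig path from the log:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("",
    		"/home/jenkins/minikube-integration/22094-362392/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// Host, CertFile, KeyFile and CAFile line up with the rest.Config
    	// fields logged by kapi.go above.
    	fmt.Println(cfg.Host) // e.g. https://192.168.49.2:8441
    }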
	I1210 06:28:10.368683  401365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:28:10.368778  401365 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 06:28:10.376809  401365 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 06:28:10.376842  401365 kubeadm.go:602] duration metric: took 18.728652ms to restartPrimaryControlPlane
	I1210 06:28:10.376852  401365 kubeadm.go:403] duration metric: took 54.348915ms to StartCluster
	I1210 06:28:10.376867  401365 settings.go:142] acquiring lock: {Name:mk50f3096e55008942432e93c6cf4976a70e957e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.376930  401365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.377580  401365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.377783  401365 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:28:10.378131  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:10.378203  401365 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:28:10.378273  401365 addons.go:70] Setting storage-provisioner=true in profile "functional-253997"
	I1210 06:28:10.378288  401365 addons.go:239] Setting addon storage-provisioner=true in "functional-253997"
	I1210 06:28:10.378298  401365 addons.go:70] Setting default-storageclass=true in profile "functional-253997"
	I1210 06:28:10.378308  401365 host.go:66] Checking if "functional-253997" exists ...
	I1210 06:28:10.378325  401365 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-253997"
	I1210 06:28:10.378609  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.378772  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.382148  401365 out.go:179] * Verifying Kubernetes components...
	I1210 06:28:10.385829  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:10.411769  401365 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.411927  401365 kapi.go:59] client config for functional-253997: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:28:10.412189  401365 addons.go:239] Setting addon default-storageclass=true in "functional-253997"
	I1210 06:28:10.412217  401365 host.go:66] Checking if "functional-253997" exists ...
	I1210 06:28:10.412622  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.423310  401365 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:28:10.429289  401365 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:10.429319  401365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:28:10.429390  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:10.437508  401365 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:10.437529  401365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:28:10.437602  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:10.484090  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:10.489523  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:10.601993  401365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:28:10.611397  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:10.637290  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.377346  401365 node_ready.go:35] waiting up to 6m0s for node "functional-253997" to be "Ready" ...
	I1210 06:28:11.377544  401365 type.go:168] "Request Body" body=""
	I1210 06:28:11.377656  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.377728  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1210 06:28:11.377850  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.377894  401365 retry.go:31] will retry after 259.470683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.378104  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.378200  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.378242  401365 retry.go:31] will retry after 196.4073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
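	From here the log settles into a retry loop: each kubectl apply fails with connection refused while the apiserver restarts, and retry.go reschedules it after a growing, jittered delay (259ms, 196ms, 582ms, and eventually several seconds). A self-contained sketch of that pattern, an illustration rather than minikube's actual retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryExpo reruns f with a roughly doubling, jittered delay until it
    // succeeds or maxTotal elapses, mirroring the uneven intervals above.
    func retryExpo(f func() error, initial, maxTotal time.Duration) error {
    	deadline := time.Now().Add(maxTotal)
    	delay := initial
    	for {
    		err := f()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return err
    		}
    		d := delay/2 + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    		delay *= 2
    	}
    }

    func main() {
    	attempts := 0
    	_ = retryExpo(func() error {
    		attempts++
    		if attempts < 3 {
    			return errors.New("connection refused")
    		}
    		return nil
    	}, 200*time.Millisecond, 10*time.Second)
    }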
	I1210 06:28:11.378345  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:11.575829  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.638697  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:11.638779  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.638826  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.638871  401365 retry.go:31] will retry after 208.428392ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.692820  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.696338  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.696370  401365 retry.go:31] will retry after 282.781918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.847619  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.878109  401365 type.go:168] "Request Body" body=""
	I1210 06:28:11.878199  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:11.878519  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:11.905645  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.908839  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.908880  401365 retry.go:31] will retry after 582.02813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.980121  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:12.039691  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.043135  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.043170  401365 retry.go:31] will retry after 432.314142ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.377666  401365 type.go:168] "Request Body" body=""
	I1210 06:28:12.377744  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:12.378081  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:12.476496  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:12.492099  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:12.562290  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.562336  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562356  401365 retry.go:31] will retry after 1.009011504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562409  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.562427  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562433  401365 retry.go:31] will retry after 937.221861ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.877643  401365 type.go:168] "Request Body" body=""
	I1210 06:28:12.877787  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:12.878157  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:13.377677  401365 type.go:168] "Request Body" body=""
	I1210 06:28:13.377754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:13.378100  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:13.378160  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
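	In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-253997 roughly every 500ms, tolerating connection refused until the apiserver is back and the node reports Ready. A client-go sketch of that poll, assuming the same kubeconfig and node name as the log (the real code bounds the wait with the 6m0s timeout noted earlier):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("",
    		"/home/jenkins/minikube-integration/22094-362392/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
    			"functional-253997", metav1.GetOptions{})
    		if err == nil { // transient connection-refused errors are retried
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
    	}
    	fmt.Println("timed out waiting for node Ready")
    }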
	I1210 06:28:13.500598  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:13.556443  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:13.560062  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.560116  401365 retry.go:31] will retry after 1.265541277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.572329  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:13.633856  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:13.637464  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.637509  401365 retry.go:31] will retry after 1.331173049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.877793  401365 type.go:168] "Request Body" body=""
	I1210 06:28:13.877888  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:13.878199  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.377638  401365 type.go:168] "Request Body" body=""
	I1210 06:28:14.377730  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:14.378082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.825793  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:14.878190  401365 type.go:168] "Request Body" body=""
	I1210 06:28:14.878261  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:14.878521  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.884055  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:14.884152  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:14.884201  401365 retry.go:31] will retry after 1.396995132s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:14.969467  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:15.059973  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:15.064387  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:15.064489  401365 retry.go:31] will retry after 957.92161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:15.377700  401365 type.go:168] "Request Body" body=""
	I1210 06:28:15.377772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:15.378126  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:15.378206  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:15.877555  401365 type.go:168] "Request Body" body=""
	I1210 06:28:15.877664  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:15.877987  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:16.023398  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:16.083212  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:16.083269  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.083288  401365 retry.go:31] will retry after 3.316582994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.281469  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:16.346229  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:16.346265  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.346285  401365 retry.go:31] will retry after 2.05295153s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.378603  401365 type.go:168] "Request Body" body=""
	I1210 06:28:16.378688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:16.379017  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
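
Each of these poll requests advertises application/vnd.kubernetes.protobuf,application/json in its Accept header: protobuf-first content negotiation with a JSON fallback. With client-go that behavior comes from two fields on rest.Config; a short sketch (kubeconfig path taken from the log, the rest is standard client-go):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig minikube passes to kubectl above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// These two fields produce the Accept header seen in the
	// round_trippers lines: protobuf preferred, JSON as fallback.
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", cs != nil)
}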
	I1210 06:28:16.877615  401365 type.go:168] "Request Body" body=""
	I1210 06:28:16.877690  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:16.878019  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:17.377588  401365 type.go:168] "Request Body" body=""
	I1210 06:28:17.377663  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:17.377946  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:17.877651  401365 type.go:168] "Request Body" body=""
	I1210 06:28:17.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:17.878120  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:17.878201  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
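
The node_ready warnings come from a readiness loop: fetch the Node object roughly every 500 ms and inspect its Ready condition, tolerating transient errors such as the connection refusals above. A client-go sketch of that check (node name and cadence taken from the log; the loop structure is an assumption, not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition is True,
// or until the context is cancelled.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// e.g. "dial tcp 192.168.49.2:8441: connect: connection refused"
			fmt.Println("error getting node (will retry):", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

Called as waitNodeReady(ctx, cs, "functional-253997"), this keeps retrying exactly as the log does until the apiserver comes back or the caller gives up.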
	I1210 06:28:18.377673  401365 type.go:168] "Request Body" body=""
	I1210 06:28:18.377749  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:18.378108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:18.400386  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:18.462469  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:18.462509  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:18.462528  401365 retry.go:31] will retry after 3.621738225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:18.877637  401365 type.go:168] "Request Body" body=""
	I1210 06:28:18.877719  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:18.878031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:19.377699  401365 type.go:168] "Request Body" body=""
	I1210 06:28:19.377775  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:19.378123  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:19.400389  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:19.462507  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:19.462542  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:19.462562  401365 retry.go:31] will retry after 6.347571238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:19.878220  401365 type.go:168] "Request Body" body=""
	I1210 06:28:19.878303  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:19.878573  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:19.878624  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:20.378571  401365 type.go:168] "Request Body" body=""
	I1210 06:28:20.378643  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:20.378957  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:20.877707  401365 type.go:168] "Request Body" body=""
	I1210 06:28:20.877781  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:20.878082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:21.377732  401365 type.go:168] "Request Body" body=""
	I1210 06:28:21.377840  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:21.378217  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:21.877933  401365 type.go:168] "Request Body" body=""
	I1210 06:28:21.878005  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:21.878280  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:22.084823  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:22.150796  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:22.150852  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:22.150872  401365 retry.go:31] will retry after 8.518894464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:22.378239  401365 type.go:168] "Request Body" body=""
	I1210 06:28:22.378314  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:22.378638  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:22.378700  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:22.878392  401365 type.go:168] "Request Body" body=""
	I1210 06:28:22.878470  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:22.878811  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:23.378412  401365 type.go:168] "Request Body" body=""
	I1210 06:28:23.378493  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:23.378816  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:23.878580  401365 type.go:168] "Request Body" body=""
	I1210 06:28:23.878657  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:23.879035  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:24.377745  401365 type.go:168] "Request Body" body=""
	I1210 06:28:24.377829  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:24.378165  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:24.878042  401365 type.go:168] "Request Body" body=""
	I1210 06:28:24.878110  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:24.878379  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:24.878424  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:25.378073  401365 type.go:168] "Request Body" body=""
	I1210 06:28:25.378148  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:25.378492  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:25.811094  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:25.867131  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:25.870279  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:25.870312  401365 retry.go:31] will retry after 4.064346895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:25.878534  401365 type.go:168] "Request Body" body=""
	I1210 06:28:25.878610  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:25.878933  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:26.378357  401365 type.go:168] "Request Body" body=""
	I1210 06:28:26.378423  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:26.378728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:26.878539  401365 type.go:168] "Request Body" body=""
	I1210 06:28:26.878615  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:26.878903  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:26.878950  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:27.377638  401365 type.go:168] "Request Body" body=""
	I1210 06:28:27.377740  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:27.378052  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:27.878386  401365 type.go:168] "Request Body" body=""
	I1210 06:28:27.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:27.878757  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:28.378587  401365 type.go:168] "Request Body" body=""
	I1210 06:28:28.378670  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:28.379016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:28.877671  401365 type.go:168] "Request Body" body=""
	I1210 06:28:28.877745  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:28.878083  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:29.378412  401365 type.go:168] "Request Body" body=""
	I1210 06:28:29.378486  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:29.378756  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:29.378811  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:29.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:28:29.877767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:29.878126  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:29.935383  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:29.993267  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:29.993316  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:29.993335  401365 retry.go:31] will retry after 13.293540925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.377660  401365 type.go:168] "Request Body" body=""
	I1210 06:28:30.377733  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:30.378062  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:30.670723  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:30.731809  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:30.735358  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.735395  401365 retry.go:31] will retry after 6.439855049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.877636  401365 type.go:168] "Request Body" body=""
	I1210 06:28:30.877707  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:30.878037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:31.377698  401365 type.go:168] "Request Body" body=""
	I1210 06:28:31.377773  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:31.378112  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:31.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:28:31.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:31.878135  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:31.878196  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:32.377829  401365 type.go:168] "Request Body" body=""
	I1210 06:28:32.377902  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:32.378167  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:32.877646  401365 type.go:168] "Request Body" body=""
	I1210 06:28:32.877742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:32.878081  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:33.377665  401365 type.go:168] "Request Body" body=""
	I1210 06:28:33.377750  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:33.378080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:33.878372  401365 type.go:168] "Request Body" body=""
	I1210 06:28:33.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:33.878725  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:33.878768  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:34.378621  401365 type.go:168] "Request Body" body=""
	I1210 06:28:34.378700  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:34.379046  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:34.877880  401365 type.go:168] "Request Body" body=""
	I1210 06:28:34.877952  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:34.878345  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:35.378044  401365 type.go:168] "Request Body" body=""
	I1210 06:28:35.378114  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:35.378389  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:35.878221  401365 type.go:168] "Request Body" body=""
	I1210 06:28:35.878303  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:35.878728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:35.878785  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:36.378584  401365 type.go:168] "Request Body" body=""
	I1210 06:28:36.378665  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:36.379008  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:36.878369  401365 type.go:168] "Request Body" body=""
	I1210 06:28:36.878444  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:36.878707  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:37.176405  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:37.232388  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:37.235885  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:37.235920  401365 retry.go:31] will retry after 10.78688793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:37.378207  401365 type.go:168] "Request Body" body=""
	I1210 06:28:37.378282  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:37.378581  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:37.878419  401365 type.go:168] "Request Body" body=""
	I1210 06:28:37.878495  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:37.878813  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:37.878863  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:38.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:28:38.378474  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:38.378754  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:38.878544  401365 type.go:168] "Request Body" body=""
	I1210 06:28:38.878622  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:38.878987  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:39.377713  401365 type.go:168] "Request Body" body=""
	I1210 06:28:39.377797  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:39.378129  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:39.878083  401365 type.go:168] "Request Body" body=""
	I1210 06:28:39.878150  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:39.878429  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:40.378442  401365 type.go:168] "Request Body" body=""
	I1210 06:28:40.378523  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:40.378856  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:40.378911  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
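
Every poll fails the same way at the TCP layer: connection refused on 192.168.49.2:8441, meaning nothing is listening (the apiserver is down or restarting) rather than a timeout or TLS problem. A quick reachability probe equivalent to what these dial errors report (address taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the round_trippers lines target.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", time.Second)
	if err != nil {
		// Prints "connect: connection refused" while the apiserver is down.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}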
	I1210 06:28:40.877583  401365 type.go:168] "Request Body" body=""
	I1210 06:28:40.877679  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:40.878034  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:41.378374  401365 type.go:168] "Request Body" body=""
	I1210 06:28:41.378447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:41.378715  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:41.878491  401365 type.go:168] "Request Body" body=""
	I1210 06:28:41.878566  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:41.878923  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:42.377671  401365 type.go:168] "Request Body" body=""
	I1210 06:28:42.377751  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:42.378141  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:42.877599  401365 type.go:168] "Request Body" body=""
	I1210 06:28:42.877683  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:42.877945  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:42.877984  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:43.287649  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:43.346928  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:43.346975  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:43.346995  401365 retry.go:31] will retry after 14.625741063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:43.378235  401365 type.go:168] "Request Body" body=""
	I1210 06:28:43.378315  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:43.378642  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:43.878431  401365 type.go:168] "Request Body" body=""
	I1210 06:28:43.878526  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:43.878848  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:44.378341  401365 type.go:168] "Request Body" body=""
	I1210 06:28:44.378412  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:44.378674  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:44.877586  401365 type.go:168] "Request Body" body=""
	I1210 06:28:44.877680  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:44.878028  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:44.878086  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:45.377798  401365 type.go:168] "Request Body" body=""
	I1210 06:28:45.377879  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:45.378245  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:45.878503  401365 type.go:168] "Request Body" body=""
	I1210 06:28:45.878572  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:45.878831  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:46.378595  401365 type.go:168] "Request Body" body=""
	I1210 06:28:46.378672  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:46.378982  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:46.877682  401365 type.go:168] "Request Body" body=""
	I1210 06:28:46.877759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:46.878095  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:46.878155  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:47.377841  401365 type.go:168] "Request Body" body=""
	I1210 06:28:47.377917  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:47.378263  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:47.877992  401365 type.go:168] "Request Body" body=""
	I1210 06:28:47.878078  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:47.878438  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:48.023828  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:48.081536  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:48.084895  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:48.084933  401365 retry.go:31] will retry after 18.097374996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:48.378332  401365 type.go:168] "Request Body" body=""
	I1210 06:28:48.378422  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:48.378753  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:48.878419  401365 type.go:168] "Request Body" body=""
	I1210 06:28:48.878497  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:48.878762  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:48.878816  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:49.378574  401365 type.go:168] "Request Body" body=""
	I1210 06:28:49.378648  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:49.378945  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:49.877700  401365 type.go:168] "Request Body" body=""
	I1210 06:28:49.877800  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:49.878143  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:50.377920  401365 type.go:168] "Request Body" body=""
	I1210 06:28:50.377988  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:50.378294  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:50.877693  401365 type.go:168] "Request Body" body=""
	I1210 06:28:50.877785  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:50.878095  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:51.377686  401365 type.go:168] "Request Body" body=""
	I1210 06:28:51.377791  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:51.378134  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:51.378207  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:51.877781  401365 type.go:168] "Request Body" body=""
	I1210 06:28:51.877851  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:51.878166  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:52.377911  401365 type.go:168] "Request Body" body=""
	I1210 06:28:52.377995  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:52.378322  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:52.878024  401365 type.go:168] "Request Body" body=""
	I1210 06:28:52.878097  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-253997 polled every ~500ms from 06:28:52.878 through 06:28:57.878; every response was "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logged will-retry warnings at 06:28:53.878 and 06:28:55.878 ...]
	I1210 06:28:57.973321  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:58.030522  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:58.034296  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:58.034334  401365 retry.go:31] will retry after 29.63385811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
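
	The retry.go:31 entry above shows how minikube's addon applier reacts when `kubectl apply` cannot reach the API server: it records the failure and schedules another attempt after a randomized, growing delay (29.6s here, 43.8s for the storageclass below). What follows is a minimal Go sketch of that retry-with-backoff shape; applyAddon, retryWithBackoff, and the attempt budget are illustrative assumptions, not minikube's actual internals.

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyAddon mirrors the logged command: kubectl apply --force -f <manifest>.
	// It fails while the API server is down, exactly as in the log above.
	func applyAddon(manifest string) error {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
		}
		return nil
	}

	// retryWithBackoff retries f with a randomized, exponentially growing
	// delay, echoing the "will retry after 29.63385811s" lines in the log.
	func retryWithBackoff(maxAttempts int, f func() error) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = f(); err == nil {
				return nil
			}
			delay := time.Duration(1<<attempt)*time.Second +
				time.Duration(rand.Intn(1000))*time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		_ = retryWithBackoff(5, func() error {
			return applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml")
		})
	}
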
	[... same GET polling of /api/v1/nodes/functional-253997 every ~500ms from 06:28:58.377 through 06:29:05.878, all connections refused; node_ready.go:55 will-retry warnings at 06:28:58.378, 06:29:00.378, 06:29:02.379 and 06:29:04.878 ...]
	I1210 06:29:06.182558  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:29:06.240148  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:06.243928  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:29:06.243964  401365 retry.go:31] will retry after 43.852698404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
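
	Note that the validation error above is itself an API-server symptom: `kubectl apply` first downloads the server's OpenAPI schema to validate the manifest, so with kube-apiserver down the command dies before any object is even submitted ("failed to download openapi: ... connection refused"). A hedged pre-flight sketch in Go, assuming anonymous access to the apiserver's /readyz endpoint (the default RBAC grant) and an illustrative 2-second timeout:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// apiserverReady probes /readyz; the test cluster's certificate is
	// self-signed, so verification is skipped for this health check only.
	func apiserverReady(base string) bool {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(base + "/readyz")
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		// 192.168.49.2:8441 is the endpoint the log keeps dialing.
		fmt.Println(apiserverReady("https://192.168.49.2:8441"))
	}
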
	[... same GET polling of /api/v1/nodes/functional-253997 every ~500ms from 06:29:06.378 through 06:29:27.378, all connections refused; node_ready.go:55 will-retry warnings roughly every 2s, 06:29:06.878 through 06:29:25.378 ...]
	I1210 06:29:27.669323  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:29:27.726986  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:27.731088  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:27.731190  401365 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
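
	The node_ready.go:55 warnings throughout this log come from a readiness poll: minikube fetches the node object every ~500ms and checks its Ready condition, treating a refused connection as retryable rather than fatal. Below is a minimal client-go sketch of that loop; the kubeconfig path and the 5-minute budget are assumptions for illustration, not minikube's actual configuration.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has condition Ready=True.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err // e.g. "connect: connection refused" while the apiserver is down
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		for {
			ok, err := nodeReady(ctx, cs, "functional-253997")
			if err != nil {
				fmt.Println("will retry:", err)
			} else if ok {
				fmt.Println("node Ready")
				return
			}
			select {
			case <-ctx.Done():
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
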
	I1210 06:29:27.878451  401365 type.go:168] "Request Body" body=""
	I1210 06:29:27.878523  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:27.878853  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:27.878910  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:28.378489  401365 type.go:168] "Request Body" body=""
	I1210 06:29:28.378564  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:28.378901  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:28.878380  401365 type.go:168] "Request Body" body=""
	I1210 06:29:28.878458  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:28.878719  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:29.378449  401365 type.go:168] "Request Body" body=""
	I1210 06:29:29.378529  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:29.378849  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:29.877584  401365 type.go:168] "Request Body" body=""
	I1210 06:29:29.877661  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:29.878000  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:30.377937  401365 type.go:168] "Request Body" body=""
	I1210 06:29:30.378012  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:30.378326  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:30.378387  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:30.877919  401365 type.go:168] "Request Body" body=""
	I1210 06:29:30.878019  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:30.878352  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:31.377915  401365 type.go:168] "Request Body" body=""
	I1210 06:29:31.378002  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:31.378351  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:31.878025  401365 type.go:168] "Request Body" body=""
	I1210 06:29:31.878128  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:31.878461  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:32.378235  401365 type.go:168] "Request Body" body=""
	I1210 06:29:32.378305  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:32.378637  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:32.378712  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:32.878497  401365 type.go:168] "Request Body" body=""
	I1210 06:29:32.878570  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:32.878897  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:33.378428  401365 type.go:168] "Request Body" body=""
	I1210 06:29:33.378500  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:33.378769  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:33.877562  401365 type.go:168] "Request Body" body=""
	I1210 06:29:33.877640  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:33.877963  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:34.377713  401365 type.go:168] "Request Body" body=""
	I1210 06:29:34.377821  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:34.378167  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:34.877924  401365 type.go:168] "Request Body" body=""
	I1210 06:29:34.877993  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:34.878306  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:34.878365  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:35.378234  401365 type.go:168] "Request Body" body=""
	I1210 06:29:35.378332  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:35.378669  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:35.878465  401365 type.go:168] "Request Body" body=""
	I1210 06:29:35.878539  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:35.878861  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:36.378415  401365 type.go:168] "Request Body" body=""
	I1210 06:29:36.378520  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:36.378846  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:36.877600  401365 type.go:168] "Request Body" body=""
	I1210 06:29:36.877689  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:36.878017  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:37.377713  401365 type.go:168] "Request Body" body=""
	I1210 06:29:37.377800  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:37.378154  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:37.378223  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:37.878379  401365 type.go:168] "Request Body" body=""
	I1210 06:29:37.878466  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:37.878806  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:38.378634  401365 type.go:168] "Request Body" body=""
	I1210 06:29:38.378721  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:38.379089  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:38.877647  401365 type.go:168] "Request Body" body=""
	I1210 06:29:38.877743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:38.878098  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:39.377834  401365 type.go:168] "Request Body" body=""
	I1210 06:29:39.377905  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:39.378160  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:39.878109  401365 type.go:168] "Request Body" body=""
	I1210 06:29:39.878184  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:39.878538  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:39.878595  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:40.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:29:40.378476  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:40.378793  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:40.878462  401365 type.go:168] "Request Body" body=""
	I1210 06:29:40.878582  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:40.878971  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:41.377652  401365 type.go:168] "Request Body" body=""
	I1210 06:29:41.377732  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:41.378135  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:41.877884  401365 type.go:168] "Request Body" body=""
	I1210 06:29:41.877962  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:41.878325  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:42.377611  401365 type.go:168] "Request Body" body=""
	I1210 06:29:42.377693  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:42.378065  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:42.378123  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:42.877666  401365 type.go:168] "Request Body" body=""
	I1210 06:29:42.877738  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:42.878090  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:43.377808  401365 type.go:168] "Request Body" body=""
	I1210 06:29:43.377882  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:43.378222  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:43.877625  401365 type.go:168] "Request Body" body=""
	I1210 06:29:43.877697  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:43.877990  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:44.878422  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET request/empty-response pair repeated every ~500ms through 06:29:49.878, with node_ready.go:55 logging the identical "connection refused" warning again at 06:29:46.878 and 06:29:48.878 ...]
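The cycle above is minikube's node readiness gate: it GETs the node object every ~500ms and checks its Ready condition, retrying for as long as the dial is refused. A minimal standalone sketch of the same check, using only the standard library and omitting the client credentials and protobuf decoding the real loop carries (endpoint and node name are taken from the log; everything else is illustrative):

// pollnodeready.go - illustrative only; not minikube's actual implementation.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus holds just the fields needed to read the Ready condition
// from the apiserver's JSON response.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func pollNodeReady(apiServer, node string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test cluster presents a self-signed CA; a real client would
		// load the kubeconfig CA bundle and bearer token instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(fmt.Sprintf("%s/api/v1/nodes/%s", apiServer, node))
		if err != nil {
			// The state the log above is stuck in: dial tcp ... connection refused.
			fmt.Printf("will retry: %v\n", err)
		} else {
			var ns nodeStatus
			if json.NewDecoder(resp.Body).Decode(&ns) == nil {
				for _, c := range ns.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						resp.Body.Close()
						return nil
					}
				}
			}
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the log
	}
	return fmt.Errorf("node %s not Ready after %s", node, timeout)
}

func main() {
	if err := pollNodeReady("https://192.168.49.2:8441", "functional-253997", time.Minute); err != nil {
		fmt.Println(err)
	}
}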
	I1210 06:29:50.096947  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:29:50.160267  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:50.160316  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:50.160396  401365 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 06:29:50.163553  401365 out.go:179] * Enabled addons: 
	I1210 06:29:50.167218  401365 addons.go:530] duration metric: took 1m39.789022145s for enable addons: enabled=[]
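The storageclass apply above fails only because kubectl's client-side validation tries to download the OpenAPI schema from an apiserver that is still refusing connections; minikube therefore records the failure (addons.go:477) and finishes with an empty enabled-addons list. One way to avoid burning the attempt, sketched under the assumption that the apiserver's /readyz endpoint is reachable anonymously (hypothetical helper, not minikube's code), is to gate the apply on apiserver readiness. The error text itself offers the blunter alternative of --validate=false, which skips the OpenAPI download entirely.

// waitthenapply.go - hypothetical helper, not part of minikube.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// waitThenApply polls the apiserver's /readyz endpoint and only shells out to
// `kubectl apply` once it answers 200, so that validation can fetch the
// OpenAPI schema instead of failing with "connection refused" as above.
func waitThenApply(apiServer, manifest string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Probe only; a production client would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(apiServer + "/readyz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
				if err != nil {
					return fmt.Errorf("apply failed: %v\n%s", err, out)
				}
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not ready after %s", apiServer, timeout)
}

func main() {
	err := waitThenApply("https://192.168.49.2:8441", "/etc/kubernetes/addons/storageclass.yaml", time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}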
	I1210 06:29:50.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:29:50.377718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:50.378070  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET /api/v1/nodes/functional-253997 poll repeated every ~500ms from 06:29:50 through 06:30:43, each attempt returning no response; node_ready.go:55 logged the same "connection refused" warning roughly every 2s, ending with: ...]
	W1210 06:30:43.378054  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:43.877669  401365 type.go:168] "Request Body" body=""
	I1210 06:30:43.877751  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:43.878145  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:44.377872  401365 type.go:168] "Request Body" body=""
	I1210 06:30:44.377977  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:44.378341  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:44.878225  401365 type.go:168] "Request Body" body=""
	I1210 06:30:44.878299  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:44.878563  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:45.378360  401365 type.go:168] "Request Body" body=""
	I1210 06:30:45.378435  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:45.378860  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:45.378937  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:45.878557  401365 type.go:168] "Request Body" body=""
	I1210 06:30:45.878640  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:45.878996  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:46.378357  401365 type.go:168] "Request Body" body=""
	I1210 06:30:46.378429  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:46.378738  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:46.878533  401365 type.go:168] "Request Body" body=""
	I1210 06:30:46.878610  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:46.878947  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:47.377691  401365 type.go:168] "Request Body" body=""
	I1210 06:30:47.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:47.378135  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:47.878384  401365 type.go:168] "Request Body" body=""
	I1210 06:30:47.878498  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:47.878783  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:47.878827  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:48.378583  401365 type.go:168] "Request Body" body=""
	I1210 06:30:48.378670  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:48.379006  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:48.877596  401365 type.go:168] "Request Body" body=""
	I1210 06:30:48.877674  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:48.878023  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:49.377609  401365 type.go:168] "Request Body" body=""
	I1210 06:30:49.377685  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:49.377965  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:49.877909  401365 type.go:168] "Request Body" body=""
	I1210 06:30:49.877985  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:49.878310  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:50.378111  401365 type.go:168] "Request Body" body=""
	I1210 06:30:50.378203  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:50.378557  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:50.378619  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:50.878363  401365 type.go:168] "Request Body" body=""
	I1210 06:30:50.878438  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:50.878702  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:51.378562  401365 type.go:168] "Request Body" body=""
	I1210 06:30:51.378644  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:51.378985  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:51.877673  401365 type.go:168] "Request Body" body=""
	I1210 06:30:51.877755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:51.878129  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:52.377603  401365 type.go:168] "Request Body" body=""
	I1210 06:30:52.377672  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:52.377985  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:52.877662  401365 type.go:168] "Request Body" body=""
	I1210 06:30:52.877741  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:52.878113  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:52.878172  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:53.377842  401365 type.go:168] "Request Body" body=""
	I1210 06:30:53.377929  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:53.378271  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:53.877988  401365 type.go:168] "Request Body" body=""
	I1210 06:30:53.878059  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:53.878397  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:54.378229  401365 type.go:168] "Request Body" body=""
	I1210 06:30:54.378302  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:54.378632  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:54.878381  401365 type.go:168] "Request Body" body=""
	I1210 06:30:54.878460  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:54.878809  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:54.878867  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:55.378406  401365 type.go:168] "Request Body" body=""
	I1210 06:30:55.378491  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:55.378761  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:55.878532  401365 type.go:168] "Request Body" body=""
	I1210 06:30:55.878631  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:55.878979  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:56.377687  401365 type.go:168] "Request Body" body=""
	I1210 06:30:56.377765  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:56.378102  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:56.878412  401365 type.go:168] "Request Body" body=""
	I1210 06:30:56.878480  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:56.878765  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:57.378590  401365 type.go:168] "Request Body" body=""
	I1210 06:30:57.378667  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:57.379010  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:57.379066  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:57.877659  401365 type.go:168] "Request Body" body=""
	I1210 06:30:57.877736  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:57.878094  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:58.377804  401365 type.go:168] "Request Body" body=""
	I1210 06:30:58.377882  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:58.378161  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:58.877653  401365 type.go:168] "Request Body" body=""
	I1210 06:30:58.877724  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:58.878038  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:59.377678  401365 type.go:168] "Request Body" body=""
	I1210 06:30:59.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:59.378090  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:59.878022  401365 type.go:168] "Request Body" body=""
	I1210 06:30:59.878105  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:59.878446  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:59.878509  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:00.377586  401365 type.go:168] "Request Body" body=""
	I1210 06:31:00.377680  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:00.378151  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:00.877892  401365 type.go:168] "Request Body" body=""
	I1210 06:31:00.877975  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:00.878336  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:01.377928  401365 type.go:168] "Request Body" body=""
	I1210 06:31:01.378000  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:01.378269  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:01.877906  401365 type.go:168] "Request Body" body=""
	I1210 06:31:01.877996  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:01.878438  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:02.377746  401365 type.go:168] "Request Body" body=""
	I1210 06:31:02.377823  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:02.378191  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:02.378256  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:02.878389  401365 type.go:168] "Request Body" body=""
	I1210 06:31:02.878461  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:02.878756  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:03.378549  401365 type.go:168] "Request Body" body=""
	I1210 06:31:03.378628  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:03.378977  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:03.877675  401365 type.go:168] "Request Body" body=""
	I1210 06:31:03.877754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:03.878104  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:04.377643  401365 type.go:168] "Request Body" body=""
	I1210 06:31:04.377719  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:04.378069  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:04.878124  401365 type.go:168] "Request Body" body=""
	I1210 06:31:04.878218  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:04.878572  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:04.878635  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:05.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:31:05.378481  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:05.378786  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:05.878376  401365 type.go:168] "Request Body" body=""
	I1210 06:31:05.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:05.878782  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:06.378579  401365 type.go:168] "Request Body" body=""
	I1210 06:31:06.378655  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:06.379033  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:06.877752  401365 type.go:168] "Request Body" body=""
	I1210 06:31:06.877828  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:06.878145  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:07.377614  401365 type.go:168] "Request Body" body=""
	I1210 06:31:07.377703  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:07.378053  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:07.378103  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:07.877679  401365 type.go:168] "Request Body" body=""
	I1210 06:31:07.877774  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:07.878132  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:08.377692  401365 type.go:168] "Request Body" body=""
	I1210 06:31:08.377773  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:08.378135  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:08.877811  401365 type.go:168] "Request Body" body=""
	I1210 06:31:08.877884  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:08.878180  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:09.377668  401365 type.go:168] "Request Body" body=""
	I1210 06:31:09.377754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:09.378101  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:09.378155  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:09.877923  401365 type.go:168] "Request Body" body=""
	I1210 06:31:09.877999  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:09.878321  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:10.378307  401365 type.go:168] "Request Body" body=""
	I1210 06:31:10.378386  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:10.378650  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:10.878423  401365 type.go:168] "Request Body" body=""
	I1210 06:31:10.878500  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:10.878869  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:11.378503  401365 type.go:168] "Request Body" body=""
	I1210 06:31:11.378584  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:11.378952  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:11.379008  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:11.878378  401365 type.go:168] "Request Body" body=""
	I1210 06:31:11.878450  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:11.878715  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:12.378514  401365 type.go:168] "Request Body" body=""
	I1210 06:31:12.378591  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:12.378905  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:12.877633  401365 type.go:168] "Request Body" body=""
	I1210 06:31:12.877711  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:12.878080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:13.378362  401365 type.go:168] "Request Body" body=""
	I1210 06:31:13.378431  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:13.378728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:13.878515  401365 type.go:168] "Request Body" body=""
	I1210 06:31:13.878592  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:13.878918  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:13.878976  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:14.377681  401365 type.go:168] "Request Body" body=""
	I1210 06:31:14.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:14.378115  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:14.878072  401365 type.go:168] "Request Body" body=""
	I1210 06:31:14.878147  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:14.878405  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:15.378262  401365 type.go:168] "Request Body" body=""
	I1210 06:31:15.378345  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:15.378686  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:15.878492  401365 type.go:168] "Request Body" body=""
	I1210 06:31:15.878569  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:15.878935  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:16.378356  401365 type.go:168] "Request Body" body=""
	I1210 06:31:16.378441  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:16.378690  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:16.378731  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:16.878535  401365 type.go:168] "Request Body" body=""
	I1210 06:31:16.878609  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:16.878944  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:17.377679  401365 type.go:168] "Request Body" body=""
	I1210 06:31:17.377753  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:17.378118  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:17.877723  401365 type.go:168] "Request Body" body=""
	I1210 06:31:17.877797  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:17.878157  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:18.377666  401365 type.go:168] "Request Body" body=""
	I1210 06:31:18.377744  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:18.378082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:18.877660  401365 type.go:168] "Request Body" body=""
	I1210 06:31:18.877734  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:18.878083  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:18.878141  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:19.378341  401365 type.go:168] "Request Body" body=""
	I1210 06:31:19.378417  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:19.378680  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:19.878385  401365 type.go:168] "Request Body" body=""
	I1210 06:31:19.878484  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:19.878844  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:20.377537  401365 type.go:168] "Request Body" body=""
	I1210 06:31:20.377620  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:20.377967  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:20.877662  401365 type.go:168] "Request Body" body=""
	I1210 06:31:20.877748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:20.878176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:20.878224  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:21.377644  401365 type.go:168] "Request Body" body=""
	I1210 06:31:21.377723  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:21.378064  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:21.877799  401365 type.go:168] "Request Body" body=""
	I1210 06:31:21.877892  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:21.878256  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:22.377991  401365 type.go:168] "Request Body" body=""
	I1210 06:31:22.378069  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:22.378361  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:22.877671  401365 type.go:168] "Request Body" body=""
	I1210 06:31:22.877765  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:22.878106  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:23.377677  401365 type.go:168] "Request Body" body=""
	I1210 06:31:23.377772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:23.378156  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:23.378228  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:23.877603  401365 type.go:168] "Request Body" body=""
	I1210 06:31:23.877676  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:23.877972  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:24.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:31:24.377753  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:24.378120  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:24.877983  401365 type.go:168] "Request Body" body=""
	I1210 06:31:24.878078  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:24.878441  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:25.378227  401365 type.go:168] "Request Body" body=""
	I1210 06:31:25.378296  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:25.378552  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:25.378598  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:25.878364  401365 type.go:168] "Request Body" body=""
	I1210 06:31:25.878444  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:25.878791  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:26.377537  401365 type.go:168] "Request Body" body=""
	I1210 06:31:26.377611  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:26.377959  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:26.878388  401365 type.go:168] "Request Body" body=""
	I1210 06:31:26.878456  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:26.878725  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:27.378513  401365 type.go:168] "Request Body" body=""
	I1210 06:31:27.378591  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:27.378938  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:27.378993  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:27.877665  401365 type.go:168] "Request Body" body=""
	I1210 06:31:27.877739  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:27.878092  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:28.378425  401365 type.go:168] "Request Body" body=""
	I1210 06:31:28.378506  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:28.378821  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:28.877546  401365 type.go:168] "Request Body" body=""
	I1210 06:31:28.877631  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:28.878002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:29.377725  401365 type.go:168] "Request Body" body=""
	I1210 06:31:29.377802  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:29.378176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:29.878060  401365 type.go:168] "Request Body" body=""
	I1210 06:31:29.878133  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:29.878404  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:29.878448  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:30.378433  401365 type.go:168] "Request Body" body=""
	I1210 06:31:30.378508  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:30.378874  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:30.877621  401365 type.go:168] "Request Body" body=""
	I1210 06:31:30.877699  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:30.878072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:31.377633  401365 type.go:168] "Request Body" body=""
	I1210 06:31:31.377704  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:31.378026  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:31.877665  401365 type.go:168] "Request Body" body=""
	I1210 06:31:31.877748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:31.878092  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:32.377680  401365 type.go:168] "Request Body" body=""
	I1210 06:31:32.377759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:32.378156  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:32.378215  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:32.878393  401365 type.go:168] "Request Body" body=""
	I1210 06:31:32.878461  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:32.878721  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:33.378508  401365 type.go:168] "Request Body" body=""
	I1210 06:31:33.378585  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:33.379111  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:33.877686  401365 type.go:168] "Request Body" body=""
	I1210 06:31:33.877775  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:33.878146  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:34.377671  401365 type.go:168] "Request Body" body=""
	I1210 06:31:34.377743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:34.378043  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:34.877949  401365 type.go:168] "Request Body" body=""
	I1210 06:31:34.878028  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:34.878374  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:34.878438  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:35.378226  401365 type.go:168] "Request Body" body=""
	I1210 06:31:35.378306  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:35.378649  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:35.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:31:35.878471  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:35.878748  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:36.378551  401365 type.go:168] "Request Body" body=""
	I1210 06:31:36.378631  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:36.378948  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:36.877548  401365 type.go:168] "Request Body" body=""
	I1210 06:31:36.877626  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:36.877995  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:37.378404  401365 type.go:168] "Request Body" body=""
	I1210 06:31:37.378472  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:37.378739  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:37.378783  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:37.878571  401365 type.go:168] "Request Body" body=""
	I1210 06:31:37.878646  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:37.878969  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:38.378416  401365 type.go:168] "Request Body" body=""
	I1210 06:31:38.378491  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:38.378834  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:38.878423  401365 type.go:168] "Request Body" body=""
	I1210 06:31:38.878499  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:38.878770  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:39.378611  401365 type.go:168] "Request Body" body=""
	I1210 06:31:39.378694  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:39.379044  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:39.379105  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:39.878018  401365 type.go:168] "Request Body" body=""
	I1210 06:31:39.878102  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:39.878461  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:40.378264  401365 type.go:168] "Request Body" body=""
	I1210 06:31:40.378348  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:40.378617  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:40.878427  401365 type.go:168] "Request Body" body=""
	I1210 06:31:40.878510  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:40.878851  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:41.377583  401365 type.go:168] "Request Body" body=""
	I1210 06:31:41.377658  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:41.377991  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:41.877560  401365 type.go:168] "Request Body" body=""
	I1210 06:31:41.877633  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:41.877903  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:41.877948  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:42.377649  401365 type.go:168] "Request Body" body=""
	I1210 06:31:42.377727  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:42.378093  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:42.877664  401365 type.go:168] "Request Body" body=""
	I1210 06:31:42.877739  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:42.878032  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:43.378436  401365 type.go:168] "Request Body" body=""
	I1210 06:31:43.378507  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:43.378831  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:43.878454  401365 type.go:168] "Request Body" body=""
	I1210 06:31:43.878546  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:43.878900  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:43.878962  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:44.378527  401365 type.go:168] "Request Body" body=""
	I1210 06:31:44.378608  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:44.378911  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:44.877852  401365 type.go:168] "Request Body" body=""
	I1210 06:31:44.877944  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:44.878230  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:45.377683  401365 type.go:168] "Request Body" body=""
	I1210 06:31:45.377757  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:45.378232  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:45.877964  401365 type.go:168] "Request Body" body=""
	I1210 06:31:45.878060  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:45.878412  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.378182  401365 type.go:168] "Request Body" body=""
	I1210 06:31:46.378267  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.378573  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:46.378621  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:46.878427  401365 type.go:168] "Request Body" body=""
	I1210 06:31:46.878510  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.878849  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.378554  401365 type.go:168] "Request Body" body=""
	I1210 06:31:47.378637  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.379010  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.878381  401365 type.go:168] "Request Body" body=""
	I1210 06:31:47.878453  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.878751  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:48.378580  401365 type.go:168] "Request Body" body=""
	I1210 06:31:48.378660  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.378984  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:48.379037  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:48.877565  401365 type.go:168] "Request Body" body=""
	I1210 06:31:48.877642  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.877972  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.378371  401365 type.go:168] "Request Body" body=""
	I1210 06:31:49.378448  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.378712  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.878386  401365 type.go:168] "Request Body" body=""
	I1210 06:31:49.878461  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.878790  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.377587  401365 type.go:168] "Request Body" body=""
	I1210 06:31:50.377673  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.378035  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.878395  401365 type.go:168] "Request Body" body=""
	I1210 06:31:50.878469  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.878754  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:50.878808  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:51.378548  401365 type.go:168] "Request Body" body=""
	I1210 06:31:51.378627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.378976  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.877595  401365 type.go:168] "Request Body" body=""
	I1210 06:31:51.877677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.878064  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.378358  401365 type.go:168] "Request Body" body=""
	I1210 06:31:52.378433  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.378695  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.878474  401365 type.go:168] "Request Body" body=""
	I1210 06:31:52.878551  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.878895  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:52.878957  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:53.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:31:53.377721  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.378047  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.877607  401365 type.go:168] "Request Body" body=""
	I1210 06:31:53.877682  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.878066  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.377680  401365 type.go:168] "Request Body" body=""
	I1210 06:31:54.377759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.378069  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.877984  401365 type.go:168] "Request Body" body=""
	I1210 06:31:54.878068  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.878451  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:55.378227  401365 type.go:168] "Request Body" body=""
	I1210 06:31:55.378305  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.378567  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:55.378612  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:55.878449  401365 type.go:168] "Request Body" body=""
	I1210 06:31:55.878524  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.878878  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.377607  401365 type.go:168] "Request Body" body=""
	I1210 06:31:56.377684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.378031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:31:56.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.878731  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:57.378523  401365 type.go:168] "Request Body" body=""
	I1210 06:31:57.378605  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.378963  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:57.379024  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:57.878422  401365 type.go:168] "Request Body" body=""
	I1210 06:31:57.878496  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.878837  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:58.378369  401365 type.go:168] "Request Body" body=""
	I1210 06:31:58.378450  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.378724  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:58.878516  401365 type.go:168] "Request Body" body=""
	I1210 06:31:58.878590  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.878936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:31:59.377756  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.378079  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.878003  401365 type.go:168] "Request Body" body=""
	I1210 06:31:59.878079  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.878346  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:59.878388  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:00.378620  401365 type.go:168] "Request Body" body=""
	I1210 06:32:00.378720  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.379187  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:00.877753  401365 type.go:168] "Request Body" body=""
	I1210 06:32:00.877830  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.878187  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.377623  401365 type.go:168] "Request Body" body=""
	I1210 06:32:01.377694  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.377960  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.877717  401365 type.go:168] "Request Body" body=""
	I1210 06:32:01.877791  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.878137  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:02.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:32:02.377750  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.378099  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:02.378152  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:02.878416  401365 type.go:168] "Request Body" body=""
	I1210 06:32:02.878493  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.878764  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:03.378615  401365 type.go:168] "Request Body" body=""
	I1210 06:32:03.378694  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:03.379016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:03.877719  401365 type.go:168] "Request Body" body=""
	I1210 06:32:03.877801  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:03.878168  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:04.377604  401365 type.go:168] "Request Body" body=""
	I1210 06:32:04.377682  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:04.378022  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:04.878029  401365 type.go:168] "Request Body" body=""
	I1210 06:32:04.878113  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:04.878426  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:04.878477  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:05.378217  401365 type.go:168] "Request Body" body=""
	I1210 06:32:05.378293  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:05.378623  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:05.878242  401365 type.go:168] "Request Body" body=""
	I1210 06:32:05.878313  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:05.878586  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:06.378446  401365 type.go:168] "Request Body" body=""
	I1210 06:32:06.378528  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:06.378861  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:06.877578  401365 type.go:168] "Request Body" body=""
	I1210 06:32:06.877651  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:06.877991  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:07.378348  401365 type.go:168] "Request Body" body=""
	I1210 06:32:07.378430  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:07.378696  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:07.378747  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:07.878485  401365 type.go:168] "Request Body" body=""
	I1210 06:32:07.878566  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:07.878891  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:08.377683  401365 type.go:168] "Request Body" body=""
	I1210 06:32:08.377758  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:08.378068  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:08.877617  401365 type.go:168] "Request Body" body=""
	I1210 06:32:08.877686  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:08.877996  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:09.377667  401365 type.go:168] "Request Body" body=""
	I1210 06:32:09.377749  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:09.378070  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:09.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:32:09.878526  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:09.878847  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:09.878895  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:10.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:32:10.377695  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:10.377992  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:10.877670  401365 type.go:168] "Request Body" body=""
	I1210 06:32:10.877747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:10.878107  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:11.377752  401365 type.go:168] "Request Body" body=""
	I1210 06:32:11.377832  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:11.378194  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:11.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:32:11.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:11.878721  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:12.378536  401365 type.go:168] "Request Body" body=""
	I1210 06:32:12.378609  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:12.379037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:12.379094  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:12.877634  401365 type.go:168] "Request Body" body=""
	I1210 06:32:12.877718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:12.878024  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:13.377615  401365 type.go:168] "Request Body" body=""
	I1210 06:32:13.377684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:13.377949  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:13.877636  401365 type.go:168] "Request Body" body=""
	I1210 06:32:13.877713  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:13.878091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:14.377642  401365 type.go:168] "Request Body" body=""
	I1210 06:32:14.377717  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:14.378074  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:14.877991  401365 type.go:168] "Request Body" body=""
	I1210 06:32:14.878073  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:14.878405  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:14.878468  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:15.378244  401365 type.go:168] "Request Body" body=""
	I1210 06:32:15.378316  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:15.378669  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:15.878506  401365 type.go:168] "Request Body" body=""
	I1210 06:32:15.878598  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:15.878952  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:16.378402  401365 type.go:168] "Request Body" body=""
	I1210 06:32:16.378473  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:16.378735  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:16.878581  401365 type.go:168] "Request Body" body=""
	I1210 06:32:16.878668  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:16.879029  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:16.879085  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:17.377664  401365 type.go:168] "Request Body" body=""
	I1210 06:32:17.377738  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:17.378065  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:17.877603  401365 type.go:168] "Request Body" body=""
	I1210 06:32:17.877677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:17.877943  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:18.377679  401365 type.go:168] "Request Body" body=""
	I1210 06:32:18.377754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:18.378106  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:18.877827  401365 type.go:168] "Request Body" body=""
	I1210 06:32:18.877917  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:18.878299  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:19.377981  401365 type.go:168] "Request Body" body=""
	I1210 06:32:19.378062  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:19.378390  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:19.378451  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:19.878242  401365 type.go:168] "Request Body" body=""
	I1210 06:32:19.878318  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:19.878664  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:20.377555  401365 type.go:168] "Request Body" body=""
	I1210 06:32:20.377633  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:20.377966  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:20.877592  401365 type.go:168] "Request Body" body=""
	I1210 06:32:20.877663  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:20.878022  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:21.377596  401365 type.go:168] "Request Body" body=""
	I1210 06:32:21.377677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:21.377971  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:21.877671  401365 type.go:168] "Request Body" body=""
	I1210 06:32:21.877747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:21.878078  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:21.878135  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:22.377603  401365 type.go:168] "Request Body" body=""
	I1210 06:32:22.377681  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:22.377998  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:22.877713  401365 type.go:168] "Request Body" body=""
	I1210 06:32:22.877789  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:22.878146  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:23.378586  401365 type.go:168] "Request Body" body=""
	I1210 06:32:23.378663  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:23.379023  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:23.877627  401365 type.go:168] "Request Body" body=""
	I1210 06:32:23.877698  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:23.878027  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:24.377671  401365 type.go:168] "Request Body" body=""
	I1210 06:32:24.377755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:24.378140  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:24.378210  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:24.878158  401365 type.go:168] "Request Body" body=""
	I1210 06:32:24.878240  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:24.878611  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:25.378254  401365 type.go:168] "Request Body" body=""
	I1210 06:32:25.378329  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:25.378601  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:25.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:32:25.878460  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:25.878767  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:26.378460  401365 type.go:168] "Request Body" body=""
	I1210 06:32:26.378534  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:26.378923  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:26.378977  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:26.878379  401365 type.go:168] "Request Body" body=""
	I1210 06:32:26.878453  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:26.878804  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:27.378593  401365 type.go:168] "Request Body" body=""
	I1210 06:32:27.378674  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:27.379034  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:27.877668  401365 type.go:168] "Request Body" body=""
	I1210 06:32:27.877743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:27.878091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:28.378401  401365 type.go:168] "Request Body" body=""
	I1210 06:32:28.378470  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:28.378735  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:28.878509  401365 type.go:168] "Request Body" body=""
	I1210 06:32:28.878592  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:28.878904  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:28.878959  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical poll repeats every ~500 ms from 06:32:29.377 through 06:33:30.878: GET https://192.168.49.2:8441/api/v1/nodes/functional-253997 with the same Accept/User-Agent headers, an empty response, and a node_ready.go:55 "will retry" warning ("dial tcp 192.168.49.2:8441: connect: connection refused") roughly every 2–2.5 s ...]
	I1210 06:33:31.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:33:31.377692  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:31.377951  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:31.877635  401365 type.go:168] "Request Body" body=""
	I1210 06:33:31.877717  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:31.878049  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:31.878116  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:32.377678  401365 type.go:168] "Request Body" body=""
	I1210 06:33:32.377760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:32.378103  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:32.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:32.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:32.877995  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:33.377756  401365 type.go:168] "Request Body" body=""
	I1210 06:33:33.377840  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:33.378198  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:33.877915  401365 type.go:168] "Request Body" body=""
	I1210 06:33:33.877993  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:33.878332  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:33.878392  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:34.377635  401365 type.go:168] "Request Body" body=""
	I1210 06:33:34.377727  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:34.378085  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:34.878096  401365 type.go:168] "Request Body" body=""
	I1210 06:33:34.878177  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:34.878550  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:35.378207  401365 type.go:168] "Request Body" body=""
	I1210 06:33:35.378280  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:35.378622  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:35.878407  401365 type.go:168] "Request Body" body=""
	I1210 06:33:35.878481  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:35.878777  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:35.878833  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:36.378544  401365 type.go:168] "Request Body" body=""
	I1210 06:33:36.378618  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:36.378979  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:36.877667  401365 type.go:168] "Request Body" body=""
	I1210 06:33:36.877742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:36.878064  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:37.377603  401365 type.go:168] "Request Body" body=""
	I1210 06:33:37.377674  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:37.377946  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:37.877690  401365 type.go:168] "Request Body" body=""
	I1210 06:33:37.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:37.878181  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:38.377888  401365 type.go:168] "Request Body" body=""
	I1210 06:33:38.377973  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:38.378298  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:38.378347  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:38.877651  401365 type.go:168] "Request Body" body=""
	I1210 06:33:38.877756  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:38.878041  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:39.377680  401365 type.go:168] "Request Body" body=""
	I1210 06:33:39.377755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:39.378094  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:39.877930  401365 type.go:168] "Request Body" body=""
	I1210 06:33:39.878008  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:39.878344  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:40.378300  401365 type.go:168] "Request Body" body=""
	I1210 06:33:40.378366  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:40.378615  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:40.378657  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:40.878469  401365 type.go:168] "Request Body" body=""
	I1210 06:33:40.878546  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:40.878897  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:41.378609  401365 type.go:168] "Request Body" body=""
	I1210 06:33:41.378684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:41.379020  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:41.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:41.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:41.878041  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:42.377673  401365 type.go:168] "Request Body" body=""
	I1210 06:33:42.377747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:42.378116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:42.877854  401365 type.go:168] "Request Body" body=""
	I1210 06:33:42.877940  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:42.878285  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:42.878351  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:43.377678  401365 type.go:168] "Request Body" body=""
	I1210 06:33:43.377746  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.378043  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:43.877664  401365 type.go:168] "Request Body" body=""
	I1210 06:33:43.877741  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.878068  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:44.377646  401365 type.go:168] "Request Body" body=""
	I1210 06:33:44.377729  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.378109  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:44.877931  401365 type.go:168] "Request Body" body=""
	I1210 06:33:44.878000  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.878273  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:45.377683  401365 type.go:168] "Request Body" body=""
	I1210 06:33:45.377768  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.378162  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:45.378230  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:45.877646  401365 type.go:168] "Request Body" body=""
	I1210 06:33:45.877726  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.878079  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.378365  401365 type.go:168] "Request Body" body=""
	I1210 06:33:46.378443  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.378778  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.878592  401365 type.go:168] "Request Body" body=""
	I1210 06:33:46.878667  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.879016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.377612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:47.377693  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.378037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.878404  401365 type.go:168] "Request Body" body=""
	I1210 06:33:47.878481  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.878791  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:47.878833  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:48.378603  401365 type.go:168] "Request Body" body=""
	I1210 06:33:48.378679  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.379038  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:48.877634  401365 type.go:168] "Request Body" body=""
	I1210 06:33:48.877710  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.878058  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.377585  401365 type.go:168] "Request Body" body=""
	I1210 06:33:49.377661  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.377929  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.877952  401365 type.go:168] "Request Body" body=""
	I1210 06:33:49.878030  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.878370  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:50.378433  401365 type.go:168] "Request Body" body=""
	I1210 06:33:50.378512  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.378851  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:50.378908  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:50.878409  401365 type.go:168] "Request Body" body=""
	I1210 06:33:50.878484  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.878745  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.378528  401365 type.go:168] "Request Body" body=""
	I1210 06:33:51.378608  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.378930  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.877680  401365 type.go:168] "Request Body" body=""
	I1210 06:33:51.877772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.878121  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:33:52.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.378042  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.877736  401365 type.go:168] "Request Body" body=""
	I1210 06:33:52.877859  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.878200  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:52.878263  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:53.377750  401365 type.go:168] "Request Body" body=""
	I1210 06:33:53.377829  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.378164  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:53.878375  401365 type.go:168] "Request Body" body=""
	I1210 06:33:53.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.878711  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.378552  401365 type.go:168] "Request Body" body=""
	I1210 06:33:54.378627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.378978  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.877937  401365 type.go:168] "Request Body" body=""
	I1210 06:33:54.878016  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.878372  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:54.878426  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:55.377557  401365 type.go:168] "Request Body" body=""
	I1210 06:33:55.377627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.377890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:55.877581  401365 type.go:168] "Request Body" body=""
	I1210 06:33:55.877661  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.878044  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:33:56.377687  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.378031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.878382  401365 type.go:168] "Request Body" body=""
	I1210 06:33:56.878463  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.878747  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:56.878792  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:57.378563  401365 type.go:168] "Request Body" body=""
	I1210 06:33:57.378655  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.379048  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:57.878429  401365 type.go:168] "Request Body" body=""
	I1210 06:33:57.878505  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.878838  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.378383  401365 type.go:168] "Request Body" body=""
	I1210 06:33:58.378457  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.378729  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.878537  401365 type.go:168] "Request Body" body=""
	I1210 06:33:58.878615  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.878961  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:58.879020  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:59.377698  401365 type.go:168] "Request Body" body=""
	I1210 06:33:59.377777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.378091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:59.877943  401365 type.go:168] "Request Body" body=""
	I1210 06:33:59.878015  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.878285  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.388459  401365 type.go:168] "Request Body" body=""
	I1210 06:34:00.388551  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.388936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.877633  401365 type.go:168] "Request Body" body=""
	I1210 06:34:00.877711  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.878072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:34:01.377692  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.377964  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:01.378006  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:01.877703  401365 type.go:168] "Request Body" body=""
	I1210 06:34:01.877777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.878116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.377805  401365 type.go:168] "Request Body" body=""
	I1210 06:34:02.377886  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.378243  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.877793  401365 type.go:168] "Request Body" body=""
	I1210 06:34:02.877861  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.878116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:03.377724  401365 type.go:168] "Request Body" body=""
	I1210 06:34:03.377819  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.378176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:03.378248  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:03.877926  401365 type.go:168] "Request Body" body=""
	I1210 06:34:03.877998  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.878340  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.378166  401365 type.go:168] "Request Body" body=""
	I1210 06:34:04.378243  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.378539  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.878398  401365 type.go:168] "Request Body" body=""
	I1210 06:34:04.878479  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.878809  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:05.378551  401365 type.go:168] "Request Body" body=""
	I1210 06:34:05.378630  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.379127  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:05.379181  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:05.877595  401365 type.go:168] "Request Body" body=""
	I1210 06:34:05.877669  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.877928  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.377667  401365 type.go:168] "Request Body" body=""
	I1210 06:34:06.377742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.378094  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.877685  401365 type.go:168] "Request Body" body=""
	I1210 06:34:06.877764  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.878112  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.378383  401365 type.go:168] "Request Body" body=""
	I1210 06:34:07.378451  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.378722  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.878478  401365 type.go:168] "Request Body" body=""
	I1210 06:34:07.878553  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.878918  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:07.878972  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:08.378592  401365 type.go:168] "Request Body" body=""
	I1210 06:34:08.378675  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.379031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:08.877609  401365 type.go:168] "Request Body" body=""
	I1210 06:34:08.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.877968  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.377659  401365 type.go:168] "Request Body" body=""
	I1210 06:34:09.377734  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.378072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.877922  401365 type.go:168] "Request Body" body=""
	I1210 06:34:09.878005  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.878441  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:10.378514  401365 type.go:168] "Request Body" body=""
	I1210 06:34:10.378590  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.378890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:10.378934  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:10.877619  401365 type.go:168] "Request Body" body=""
	I1210 06:34:10.877709  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.878026  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:11.377616  401365 type.go:168] "Request Body" body=""
	I1210 06:34:11.377679  401365 node_ready.go:38] duration metric: took 6m0.000247895s for node "functional-253997" to be "Ready" ...
	I1210 06:34:11.380832  401365 out.go:203] 
	W1210 06:34:11.383623  401365 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:34:11.383641  401365 out.go:285] * 
	W1210 06:34:11.385783  401365 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:34:11.388549  401365 out.go:203] 
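
The six-minute loop above is minikube polling GET /api/v1/nodes/functional-253997 roughly every 500ms and retrying on "connection refused" until the WaitNodeCondition deadline expires, at which point it exits with GUEST_START. A minimal client-go sketch of that kind of node-Ready wait loop (not minikube's actual node_ready.go; the kubeconfig path is a placeholder) looks like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube assembles its client config differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The 6m deadline and ~500ms cadence mirror the timestamps in the log above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node Ready")
			return
		case <-tick.C:
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-253997", metav1.GetOptions{})
			if err != nil {
				continue // e.g. connection refused while the apiserver is down; keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
	}
}

Every request in the log fails within a millisecond at the TCP layer, so a loop like this can never observe a Ready condition and has no choice but to run out the deadline.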
	
	
	==> CRI-O <==
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.146353278Z" level=info msg="Using the internal default seccomp profile"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.146361803Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.146367686Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.146373898Z" level=info msg="RDT not available in the host system"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.146390497Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.147142292Z" level=info msg="Conmon does support the --sync option"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.147171528Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.147189308Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.147877119Z" level=info msg="Conmon does support the --sync option"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.147897649Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.148036112Z" level=info msg="Updated default CNI network name to "
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.148588463Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.14893442Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.148991167Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198308631Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198345202Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198393637Z" level=info msg="Create NRI interface"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198494881Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198502668Z" level=info msg="runtime interface created"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198513819Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198519891Z" level=info msg="runtime interface starting up..."
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198525897Z" level=info msg="starting plugins..."
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198538911Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:28:08 functional-253997 crio[6019]: time="2025-12-10T06:28:08.198604963Z" level=info msg="No systemd watchdog enabled"
	Dec 10 06:28:08 functional-253997 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:34:15.916660    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:15.917341    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:15.918581    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:15.919242    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:15.920969    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
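
Both kubectl here (via localhost:8441) and minikube's client earlier (via 192.168.49.2:8441) fail at the TCP connect stage, which points at the apiserver process itself rather than at routing or certificates. A self-contained probe of that failure mode, with the endpoint taken from the log above, could look like:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint copied from the log; a refused connect here means nothing
	// is listening on the apiserver port at all.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("TCP connect succeeded (port open; apiserver health is a separate question)")
}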
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 06:15] overlayfs: idmapped layers are currently not supported
	[Dec10 06:16] overlayfs: idmapped layers are currently not supported
	[Dec10 06:19] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:34:15 up  3:16,  0 user,  load average: 0.15, 0.26, 0.81
	Linux functional-253997 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:34:13 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:34:14 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 10 06:34:14 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:14 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:14 functional-253997 kubelet[9242]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:14 functional-253997 kubelet[9242]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:14 functional-253997 kubelet[9242]: E1210 06:34:14.202751    9242 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:34:14 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:34:14 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:34:14 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1143.
	Dec 10 06:34:14 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:14 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:14 functional-253997 kubelet[9276]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:14 functional-253997 kubelet[9276]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:14 functional-253997 kubelet[9276]: E1210 06:34:14.906843    9276 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:34:14 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:34:14 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:34:15 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1144.
	Dec 10 06:34:15 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:15 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:15 functional-253997 kubelet[9305]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:15 functional-253997 kubelet[9305]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:15 functional-253997 kubelet[9305]: E1210 06:34:15.689229    9305 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:34:15 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:34:15 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
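
Note: the kubelet section of the log above shows the root cause of this failure: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), crash-looping through restart counters 1142-1144, so the apiserver on port 8441 never comes up. A minimal way to confirm the cgroup mode, assuming shell access to the host and to the node container named in this report:

	# prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on a cgroup v1 host
	stat -fc %T /sys/fs/cgroup/
	# same check inside the minikube node container
	docker exec functional-253997 stat -fc %T /sys/fs/cgroup/

On Ubuntu 20.04 (the host OS shown in the logs) cgroup v1 is still the default unless the kernel is booted with systemd.unified_cgroup_hierarchy=1.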
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 2 (345.694278ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.41s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 kubectl -- --context functional-253997 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 kubectl -- --context functional-253997 get pods: exit status 1 (110.479386ms)

                                                
                                                
** stderr ** 
	E1210 06:34:24.180839  406704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:34:24.181439  406704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:34:24.182396  406704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:34:24.183003  406704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:34:24.184339  406704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-253997 kubectl -- --context functional-253997 get pods": exit status 1
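
The connection-refused errors above are a downstream symptom of the kubelet failure, not a kubectl misconfiguration: nothing is listening on 192.168.49.2:8441 while kubelet crash-loops. A quick probe of the endpoint, as a sketch (IP and port taken from this test's output; /healthz is the standard kube-apiserver health endpoint):

	# expect "ok" from a healthy apiserver; connection refused while the control plane is down
	curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"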
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-253997
helpers_test.go:244: (dbg) docker inspect functional-253997:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	        "Created": "2025-12-10T06:19:33.832297734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:33.914699563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hosts",
	        "LogPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7-json.log",
	        "Name": "/functional-253997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-253997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-253997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	                "LowerDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-253997",
	                "Source": "/var/lib/docker/volumes/functional-253997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253997",
	                "name.minikube.sigs.k8s.io": "functional-253997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484c42edbd556e17e2c5a836571d3275bc9cbd157b70c100e08a5b73322a62e6",
	            "SandboxKey": "/var/run/docker/netns/484c42edbd55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ba:52:c5:6e:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fc5aadc020d5981cfdab05726de722ae81d653cbc30b66014568069657370fe7",
	                    "EndpointID": "9ecd4882ea5c9fe5581b68ad15a0a2f450e34777aee426fcc158008c81076e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253997",
	                        "256e059cdf1e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
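
The inspect output shows the node container itself is healthy (State.Status "running", apiserver port 8441 published to 127.0.0.1:33162); only the control plane inside it is down. To pull just the port mapping instead of the full dump, a sketch using the standard docker CLI (the Go template mirrors the one minikube itself uses for 22/tcp in the Last Start log below):

	# host address bound to the apiserver port (expected here: 127.0.0.1:33162)
	docker port functional-253997 8441/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-253997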
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997: exit status 2 (306.923005ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-253997 logs -n 25: (1.079164675s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-013831 image ls --format json --alsologtostderr                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image build -t localhost/my-image:functional-013831 testdata/build --alsologtostderr                                          │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format table --alsologtostderr                                                                                     │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls                                                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ delete         │ -p functional-013831                                                                                                                            │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start          │ -p functional-253997 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ start          │ -p functional-253997 --alsologtostderr -v=8                                                                                                     │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:28 UTC │                     │
	│ cache          │ functional-253997 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache add registry.k8s.io/pause:latest                                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache add minikube-local-cache-test:functional-253997                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache delete minikube-local-cache-test:functional-253997                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl images                                                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │                     │
	│ cache          │ functional-253997 cache reload                                                                                                                  │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ kubectl        │ functional-253997 kubectl -- --context functional-253997 get pods                                                                               │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:28:04
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:28:04.696682  401365 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:28:04.696859  401365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:28:04.696892  401365 out.go:374] Setting ErrFile to fd 2...
	I1210 06:28:04.696914  401365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:28:04.697215  401365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:28:04.697662  401365 out.go:368] Setting JSON to false
	I1210 06:28:04.698567  401365 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11437,"bootTime":1765336648,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:28:04.698673  401365 start.go:143] virtualization:  
	I1210 06:28:04.702443  401365 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:28:04.705481  401365 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:28:04.705615  401365 notify.go:221] Checking for updates...
	I1210 06:28:04.711086  401365 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:28:04.713917  401365 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:04.716867  401365 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:28:04.719925  401365 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:28:04.722835  401365 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:28:04.726336  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:04.726469  401365 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:28:04.754166  401365 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:28:04.754279  401365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:28:04.810645  401365 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:28:04.801435563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:28:04.810756  401365 docker.go:319] overlay module found
	I1210 06:28:04.813864  401365 out.go:179] * Using the docker driver based on existing profile
	I1210 06:28:04.816769  401365 start.go:309] selected driver: docker
	I1210 06:28:04.816791  401365 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:04.816907  401365 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:28:04.817028  401365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:28:04.870143  401365 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:28:04.860525891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:28:04.870593  401365 cni.go:84] Creating CNI manager for ""
	I1210 06:28:04.870644  401365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:28:04.870692  401365 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:04.873854  401365 out.go:179] * Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	I1210 06:28:04.876935  401365 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:28:04.879860  401365 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:28:04.882747  401365 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:28:04.882931  401365 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:28:04.906679  401365 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:28:04.906698  401365 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:28:04.939349  401365 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 06:28:05.106989  401365 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1210 06:28:05.107216  401365 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/config.json ...
	I1210 06:28:05.107505  401365 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:28:05.107566  401365 start.go:360] acquireMachinesLock for functional-253997: {Name:mkd4a204596bf14d7530a6c5c103756527acdf26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.107643  401365 start.go:364] duration metric: took 39.278µs to acquireMachinesLock for "functional-253997"
	I1210 06:28:05.107681  401365 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:28:05.107701  401365 fix.go:54] fixHost starting: 
	I1210 06:28:05.107821  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.108032  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:05.134635  401365 fix.go:112] recreateIfNeeded on functional-253997: state=Running err=<nil>
	W1210 06:28:05.134664  401365 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:28:05.138161  401365 out.go:252] * Updating the running docker "functional-253997" container ...
	I1210 06:28:05.138204  401365 machine.go:94] provisionDockerMachine start ...
	I1210 06:28:05.138290  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.156912  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.157271  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.157282  401365 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:28:05.272681  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.312543  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:28:05.312568  401365 ubuntu.go:182] provisioning hostname "functional-253997"
	I1210 06:28:05.312643  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.337102  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.337416  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.337433  401365 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-253997 && echo "functional-253997" | sudo tee /etc/hostname
	I1210 06:28:05.435781  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.503700  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:28:05.503808  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.525010  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.525371  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.525395  401365 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-253997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-253997/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-253997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:28:05.596990  401365 cache.go:107] acquiring lock: {Name:mk9268471d41d450748d4b0133d1a043378776f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597093  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:28:05.597107  401365 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 135.879µs
	I1210 06:28:05.597123  401365 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597148  401365 cache.go:107] acquiring lock: {Name:mk8250dc655f821bf2674ed77a7683798ede4f4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597196  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:28:05.597205  401365 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 71.098µs
	I1210 06:28:05.597212  401365 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597224  401365 cache.go:107] acquiring lock: {Name:mk1c0334e51772e4f6b3429cc05b91ec19d54b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597256  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:28:05.597264  401365 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 41.773µs
	I1210 06:28:05.597271  401365 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597286  401365 cache.go:107] acquiring lock: {Name:mk41aaba55a3296a937cf380d44dc9920df69546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597313  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:28:05.597325  401365 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 45.342µs
	I1210 06:28:05.597331  401365 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597347  401365 cache.go:107] acquiring lock: {Name:mkc2831727d74e5fadb1c00bd89f0236b385200e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597380  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:28:05.597390  401365 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 49.009µs
	I1210 06:28:05.597395  401365 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:28:05.597404  401365 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597432  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:28:05.597441  401365 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 38.597µs
	I1210 06:28:05.597447  401365 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:28:05.597457  401365 cache.go:107] acquiring lock: {Name:mk7e052900e41419dc78d3311f27db508a8f64b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597487  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:28:05.597494  401365 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.163µs
	I1210 06:28:05.597499  401365 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:28:05.597517  401365 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597571  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:28:05.597584  401365 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.023µs
	I1210 06:28:05.597591  401365 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:28:05.597598  401365 cache.go:87] Successfully saved all images to host disk.
	I1210 06:28:05.681682  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:28:05.681708  401365 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 06:28:05.681741  401365 ubuntu.go:190] setting up certificates
	I1210 06:28:05.681752  401365 provision.go:84] configureAuth start
	I1210 06:28:05.681819  401365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:28:05.699808  401365 provision.go:143] copyHostCerts
	I1210 06:28:05.699863  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:28:05.699905  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 06:28:05.699919  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:28:05.699992  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 06:28:05.700081  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:28:05.700104  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 06:28:05.700113  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:28:05.700142  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 06:28:05.700188  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:28:05.700207  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 06:28:05.700218  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:28:05.700242  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 06:28:05.700300  401365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.functional-253997 san=[127.0.0.1 192.168.49.2 functional-253997 localhost minikube]
	I1210 06:28:05.936274  401365 provision.go:177] copyRemoteCerts
	I1210 06:28:05.936350  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:28:05.936418  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.954560  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.065031  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 06:28:06.065092  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:28:06.082556  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 06:28:06.082620  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:28:06.101057  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 06:28:06.101135  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:28:06.119676  401365 provision.go:87] duration metric: took 437.892883ms to configureAuth
	I1210 06:28:06.119777  401365 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:28:06.119980  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:06.120085  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.137920  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:06.138235  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:06.138256  401365 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:28:06.452845  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:28:06.452929  401365 machine.go:97] duration metric: took 1.314715304s to provisionDockerMachine
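The SSH command logged above amounts to writing one sysconfig drop-in and restarting the service so CRI-O picks up the flags. A local-equivalent sketch in Go (minikube runs this over SSH inside the node container; running it directly on a host is an assumption for illustration):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Drop CRIO_MINIKUBE_OPTIONS into a sysconfig file, then restart
	// CRI-O, mirroring the logged shell pipeline.
	opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0644); err != nil {
		panic(err)
	}
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		panic(string(out))
	}
}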
	I1210 06:28:06.452956  401365 start.go:293] postStartSetup for "functional-253997" (driver="docker")
	I1210 06:28:06.452990  401365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:28:06.453063  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:28:06.453144  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.470784  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.577269  401365 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:28:06.580692  401365 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 06:28:06.580715  401365 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 06:28:06.580720  401365 command_runner.go:130] > VERSION_ID="12"
	I1210 06:28:06.580725  401365 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 06:28:06.580730  401365 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 06:28:06.580768  401365 command_runner.go:130] > ID=debian
	I1210 06:28:06.580780  401365 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 06:28:06.580785  401365 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 06:28:06.580791  401365 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 06:28:06.580887  401365 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:28:06.580933  401365 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:28:06.580952  401365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 06:28:06.581012  401365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 06:28:06.581098  401365 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 06:28:06.581111  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> /etc/ssl/certs/3642652.pem
	I1210 06:28:06.581203  401365 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> hosts in /etc/test/nested/copy/364265
	I1210 06:28:06.581211  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> /etc/test/nested/copy/364265/hosts
	I1210 06:28:06.581307  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364265
	I1210 06:28:06.588834  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:28:06.607350  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts --> /etc/test/nested/copy/364265/hosts (40 bytes)
	I1210 06:28:06.625111  401365 start.go:296] duration metric: took 172.118023ms for postStartSetup
	I1210 06:28:06.625251  401365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:28:06.625310  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.643314  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.746089  401365 command_runner.go:130] > 11%
	I1210 06:28:06.746641  401365 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:28:06.751190  401365 command_runner.go:130] > 174G
	I1210 06:28:06.751596  401365 fix.go:56] duration metric: took 1.643890859s for fixHost
	I1210 06:28:06.751620  401365 start.go:83] releasing machines lock for "functional-253997", held for 1.643948944s
	I1210 06:28:06.751695  401365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:28:06.769599  401365 ssh_runner.go:195] Run: cat /version.json
	I1210 06:28:06.769653  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.769923  401365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:28:06.769973  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.794205  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.801527  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.995023  401365 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 06:28:06.995129  401365 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 06:28:06.995269  401365 ssh_runner.go:195] Run: systemctl --version
	I1210 06:28:07.001581  401365 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 06:28:07.001629  401365 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 06:28:07.002099  401365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:28:07.048284  401365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 06:28:07.052994  401365 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 06:28:07.053661  401365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:28:07.053769  401365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:28:07.062754  401365 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:28:07.062818  401365 start.go:496] detecting cgroup driver to use...
	I1210 06:28:07.062869  401365 detect.go:187] detected "cgroupfs" cgroup driver on host os
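The detector above reported "cgroupfs" on the host OS. A common heuristic for this kind of check, though not necessarily the exact logic in detect.go, is to look for the cgroup v2 unified-hierarchy marker; a v2 host is often paired with the systemd driver, anything else with cgroupfs:

package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver is a simplified heuristic, not minikube's detect.go:
// a unified cgroup v2 hierarchy exposes cgroup.controllers at its root.
func detectCgroupDriver() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() { fmt.Println("detected", detectCgroupDriver(), "cgroup driver") }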
	I1210 06:28:07.062946  401365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:28:07.079107  401365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:28:07.094803  401365 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:28:07.094958  401365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:28:07.114470  401365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:28:07.128193  401365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:28:07.258424  401365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:28:07.374265  401365 docker.go:234] disabling docker service ...
	I1210 06:28:07.374339  401365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:28:07.389285  401365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:28:07.403201  401365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:28:07.521904  401365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:28:07.641023  401365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:28:07.653771  401365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:28:07.666535  401365 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
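The crictl.yaml write above points crictl at CRI-O's socket; the same step without the shell pipeline, as a sketch:

package main

import "os"

func main() {
	// Equivalent of the logged `mkdir -p /etc && printf ... | tee
	// /etc/crictl.yaml`: set crictl's runtime endpoint to CRI-O.
	const cfg = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.MkdirAll("/etc", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/crictl.yaml", []byte(cfg), 0644); err != nil {
		panic(err)
	}
}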
	I1210 06:28:07.667719  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:07.817082  401365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:28:07.817158  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.826426  401365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:28:07.826509  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.835611  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.844530  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.853511  401365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:28:07.861378  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.870726  401365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.879012  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.888039  401365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:28:07.894740  401365 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 06:28:07.895767  401365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:28:07.903878  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:08.028500  401365 ssh_runner.go:195] Run: sudo systemctl restart crio
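The block above is a series of sed edits to CRI-O's drop-in config followed by a daemon-reload and restart. A sketch of the two central substitutions (pause image and cgroup manager) done in Go instead of sed, using the drop-in path from the log:

package main

import (
	"os"
	"regexp"
)

func main() {
	// Pin the pause image and force the cgroupfs manager in CRI-O's
	// drop-in config, mirroring the two sed commands above.
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}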
	I1210 06:28:08.203883  401365 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:28:08.204004  401365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:28:08.207826  401365 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1210 06:28:08.207850  401365 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 06:28:08.207858  401365 command_runner.go:130] > Device: 0,72	Inode: 1753        Links: 1
	I1210 06:28:08.207864  401365 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:28:08.207869  401365 command_runner.go:130] > Access: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207875  401365 command_runner.go:130] > Modify: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207879  401365 command_runner.go:130] > Change: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207883  401365 command_runner.go:130] >  Birth: -
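"Will wait 60s for socket path" is satisfied here by a stat of the socket file. A slightly stronger variant polls until the socket actually accepts connections; the dial-based check below is an assumption for illustration, not what ssh_runner does:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Poll until CRI-O's unix socket accepts a connection or the
	// 60s deadline (matching the logged wait) passes.
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("crio.sock is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for /var/run/crio/crio.sock")
}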
	I1210 06:28:08.207920  401365 start.go:564] Will wait 60s for crictl version
	I1210 06:28:08.207972  401365 ssh_runner.go:195] Run: which crictl
	I1210 06:28:08.211603  401365 command_runner.go:130] > /usr/local/bin/crictl
	I1210 06:28:08.211673  401365 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:28:08.233344  401365 command_runner.go:130] > Version:  0.1.0
	I1210 06:28:08.233366  401365 command_runner.go:130] > RuntimeName:  cri-o
	I1210 06:28:08.233371  401365 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1210 06:28:08.233486  401365 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 06:28:08.235784  401365 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:28:08.235868  401365 ssh_runner.go:195] Run: crio --version
	I1210 06:28:08.263554  401365 command_runner.go:130] > crio version 1.34.3
	I1210 06:28:08.263582  401365 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 06:28:08.263590  401365 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 06:28:08.263598  401365 command_runner.go:130] >    GitTreeState:   dirty
	I1210 06:28:08.263603  401365 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 06:28:08.263609  401365 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 06:28:08.263614  401365 command_runner.go:130] >    Compiler:       gc
	I1210 06:28:08.263618  401365 command_runner.go:130] >    Platform:       linux/arm64
	I1210 06:28:08.263625  401365 command_runner.go:130] >    Linkmode:       static
	I1210 06:28:08.263631  401365 command_runner.go:130] >    BuildTags:
	I1210 06:28:08.263635  401365 command_runner.go:130] >      static
	I1210 06:28:08.263641  401365 command_runner.go:130] >      netgo
	I1210 06:28:08.263644  401365 command_runner.go:130] >      osusergo
	I1210 06:28:08.263649  401365 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 06:28:08.263658  401365 command_runner.go:130] >      seccomp
	I1210 06:28:08.263662  401365 command_runner.go:130] >      apparmor
	I1210 06:28:08.263665  401365 command_runner.go:130] >      selinux
	I1210 06:28:08.263673  401365 command_runner.go:130] >    LDFlags:          unknown
	I1210 06:28:08.263678  401365 command_runner.go:130] >    SeccompEnabled:   true
	I1210 06:28:08.263686  401365 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 06:28:08.265277  401365 ssh_runner.go:195] Run: crio --version
	I1210 06:28:08.292854  401365 command_runner.go:130] > crio version 1.34.3
	I1210 06:28:08.292877  401365 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 06:28:08.292884  401365 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 06:28:08.292894  401365 command_runner.go:130] >    GitTreeState:   dirty
	I1210 06:28:08.292899  401365 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 06:28:08.292903  401365 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 06:28:08.292909  401365 command_runner.go:130] >    Compiler:       gc
	I1210 06:28:08.292914  401365 command_runner.go:130] >    Platform:       linux/arm64
	I1210 06:28:08.292918  401365 command_runner.go:130] >    Linkmode:       static
	I1210 06:28:08.292921  401365 command_runner.go:130] >    BuildTags:
	I1210 06:28:08.292925  401365 command_runner.go:130] >      static
	I1210 06:28:08.292929  401365 command_runner.go:130] >      netgo
	I1210 06:28:08.292932  401365 command_runner.go:130] >      osusergo
	I1210 06:28:08.292936  401365 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 06:28:08.292939  401365 command_runner.go:130] >      seccomp
	I1210 06:28:08.292943  401365 command_runner.go:130] >      apparmor
	I1210 06:28:08.292947  401365 command_runner.go:130] >      selinux
	I1210 06:28:08.292951  401365 command_runner.go:130] >    LDFlags:          unknown
	I1210 06:28:08.292955  401365 command_runner.go:130] >    SeccompEnabled:   true
	I1210 06:28:08.292959  401365 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 06:28:08.297960  401365 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:28:08.300955  401365 cli_runner.go:164] Run: docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:28:08.316701  401365 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:28:08.320890  401365 command_runner.go:130] > 192.168.49.1	host.minikube.internal
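The grep above verifies that the host gateway alias is already present in the node's /etc/hosts. A sketch of the same check:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Report whether the "192.168.49.1 host.minikube.internal" entry
	// from the logged grep is already in /etc/hosts.
	f, err := os.Open("/etc/hosts")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "192.168.49.1") &&
			strings.Contains(line, "host.minikube.internal") {
			fmt.Println("entry present:", line)
			return
		}
	}
	fmt.Println("entry missing; minikube would append it")
}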
	I1210 06:28:08.321107  401365 kubeadm.go:884] updating cluster {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:28:08.321383  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.467539  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.630219  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.778675  401365 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:28:08.778770  401365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:28:08.809702  401365 command_runner.go:130] > {
	I1210 06:28:08.809721  401365 command_runner.go:130] >   "images":  [
	I1210 06:28:08.809725  401365 command_runner.go:130] >     {
	I1210 06:28:08.809734  401365 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1210 06:28:08.809739  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809744  401365 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:28:08.809748  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809753  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809762  401365 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1210 06:28:08.809765  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809770  401365 command_runner.go:130] >       "size":  "29035622",
	I1210 06:28:08.809784  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809789  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809792  401365 command_runner.go:130] >     },
	I1210 06:28:08.809795  401365 command_runner.go:130] >     {
	I1210 06:28:08.809802  401365 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:28:08.809806  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809812  401365 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:28:08.809815  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809819  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809827  401365 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1210 06:28:08.809830  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809834  401365 command_runner.go:130] >       "size":  "74488375",
	I1210 06:28:08.809839  401365 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:28:08.809843  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809846  401365 command_runner.go:130] >     },
	I1210 06:28:08.809850  401365 command_runner.go:130] >     {
	I1210 06:28:08.809856  401365 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1210 06:28:08.809860  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809865  401365 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1210 06:28:08.809868  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809872  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809882  401365 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c711b5adb8fed0c7e2a62a6c327fd0f8486f90b93c1ebd0ba1e790b930373aae"
	I1210 06:28:08.809885  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809889  401365 command_runner.go:130] >       "size":  "60849030",
	I1210 06:28:08.809893  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.809897  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.809900  401365 command_runner.go:130] >       },
	I1210 06:28:08.809904  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809908  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809911  401365 command_runner.go:130] >     },
	I1210 06:28:08.809915  401365 command_runner.go:130] >     {
	I1210 06:28:08.809921  401365 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1210 06:28:08.809925  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809934  401365 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1210 06:28:08.809938  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809941  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809949  401365 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:112cff968330968b8d8cc75dd9f232b5b048067a2151629e56b80ff7af621b72"
	I1210 06:28:08.809954  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809958  401365 command_runner.go:130] >       "size":  "85012778",
	I1210 06:28:08.809961  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.809965  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.809968  401365 command_runner.go:130] >       },
	I1210 06:28:08.809973  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809977  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809980  401365 command_runner.go:130] >     },
	I1210 06:28:08.809983  401365 command_runner.go:130] >     {
	I1210 06:28:08.809989  401365 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1210 06:28:08.809994  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809999  401365 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1210 06:28:08.810002  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810006  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810014  401365 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:89a8e28214a1c4b4631281e37feaf54299e7510d43c5445d4adc88575390f71e"
	I1210 06:28:08.810017  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810021  401365 command_runner.go:130] >       "size":  "72167568",
	I1210 06:28:08.810030  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810035  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.810038  401365 command_runner.go:130] >       },
	I1210 06:28:08.810042  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810046  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810049  401365 command_runner.go:130] >     },
	I1210 06:28:08.810052  401365 command_runner.go:130] >     {
	I1210 06:28:08.810058  401365 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1210 06:28:08.810062  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810068  401365 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1210 06:28:08.810072  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810076  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810086  401365 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:01f8409e5c04cbb256f29dc118ff184a24b9cbb97c7f6d3bd462333b366566ca"
	I1210 06:28:08.810089  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810093  401365 command_runner.go:130] >       "size":  "74105636",
	I1210 06:28:08.810097  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810101  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810104  401365 command_runner.go:130] >     },
	I1210 06:28:08.810107  401365 command_runner.go:130] >     {
	I1210 06:28:08.810114  401365 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1210 06:28:08.810117  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810127  401365 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1210 06:28:08.810131  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810134  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810144  401365 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9a2a4c91f5fdaf1a3c5e839f425cc7461b9e24d5f0816c207638df25ade3eda9"
	I1210 06:28:08.810147  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810151  401365 command_runner.go:130] >       "size":  "49819792",
	I1210 06:28:08.810154  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810158  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.810160  401365 command_runner.go:130] >       },
	I1210 06:28:08.810165  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810169  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810172  401365 command_runner.go:130] >     },
	I1210 06:28:08.810175  401365 command_runner.go:130] >     {
	I1210 06:28:08.810181  401365 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:28:08.810185  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810189  401365 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:28:08.810192  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810196  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810203  401365 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1210 06:28:08.810206  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810210  401365 command_runner.go:130] >       "size":  "517328",
	I1210 06:28:08.810213  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810217  401365 command_runner.go:130] >         "value":  "65535"
	I1210 06:28:08.810220  401365 command_runner.go:130] >       },
	I1210 06:28:08.810228  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810232  401365 command_runner.go:130] >       "pinned":  true
	I1210 06:28:08.810234  401365 command_runner.go:130] >     }
	I1210 06:28:08.810237  401365 command_runner.go:130] >   ]
	I1210 06:28:08.810240  401365 command_runner.go:130] > }
	I1210 06:28:08.812152  401365 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:28:08.812177  401365 cache_images.go:86] Images are preloaded, skipping loading
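The preload decision above boils down to comparing `crictl images --output json` against the image set expected for this Kubernetes version. A sketch using the JSON shape visible in the log; the `required` set below is a hypothetical two-entry stand-in for minikube's versioned manifest:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models only the fields of `crictl images --output json`
// that the preload check needs, matching the log output above.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// Hypothetical required set; the real list lives inside minikube.
	required := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.35.0-rc.1": false,
		"registry.k8s.io/pause:3.10.1":                false,
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if _, ok := required[tag]; ok {
				required[tag] = true
			}
		}
	}
	for tag, ok := range required {
		if !ok {
			fmt.Println("missing:", tag)
			return
		}
	}
	fmt.Println("all images are preloaded")
}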
	I1210 06:28:08.812185  401365 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1210 06:28:08.812284  401365 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-253997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
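The kubelet flags above are typically materialized as a systemd drop-in that clears and redefines ExecStart. A sketch of writing that drop-in; the `10-kubeadm.conf` name under kubelet.service.d is a common convention and an assumption here, not taken from this log:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Write the unit content shown in the log as a kubelet drop-in.
	// The empty ExecStart= line resets any inherited command line.
	unit := `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-253997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
`
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(unit), 0644); err != nil {
		panic(err)
	}
	fmt.Println("wrote kubelet drop-in")
}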
	I1210 06:28:08.812367  401365 ssh_runner.go:195] Run: crio config
	I1210 06:28:08.860605  401365 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1210 06:28:08.860628  401365 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1210 06:28:08.860635  401365 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1210 06:28:08.860638  401365 command_runner.go:130] > #
	I1210 06:28:08.860654  401365 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1210 06:28:08.860661  401365 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1210 06:28:08.860668  401365 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1210 06:28:08.860677  401365 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1210 06:28:08.860680  401365 command_runner.go:130] > # reload'.
	I1210 06:28:08.860687  401365 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1210 06:28:08.860694  401365 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1210 06:28:08.860700  401365 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1210 06:28:08.860706  401365 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1210 06:28:08.860709  401365 command_runner.go:130] > [crio]
	I1210 06:28:08.860716  401365 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1210 06:28:08.860721  401365 command_runner.go:130] > # containers images, in this directory.
	I1210 06:28:08.860730  401365 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1210 06:28:08.860737  401365 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1210 06:28:08.860742  401365 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1210 06:28:08.860760  401365 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1210 06:28:08.860811  401365 command_runner.go:130] > # imagestore = ""
	I1210 06:28:08.860819  401365 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1210 06:28:08.860826  401365 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1210 06:28:08.860837  401365 command_runner.go:130] > # storage_driver = "overlay"
	I1210 06:28:08.860843  401365 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1210 06:28:08.860850  401365 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1210 06:28:08.860853  401365 command_runner.go:130] > # storage_option = [
	I1210 06:28:08.860857  401365 command_runner.go:130] > # ]
	I1210 06:28:08.860864  401365 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1210 06:28:08.860870  401365 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1210 06:28:08.860874  401365 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1210 06:28:08.860880  401365 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1210 06:28:08.860886  401365 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1210 06:28:08.860890  401365 command_runner.go:130] > # always happen on a node reboot
	I1210 06:28:08.860894  401365 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1210 06:28:08.860905  401365 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1210 06:28:08.860911  401365 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1210 06:28:08.860918  401365 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1210 06:28:08.860922  401365 command_runner.go:130] > # version_file_persist = ""
	I1210 06:28:08.860930  401365 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1210 06:28:08.860938  401365 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1210 06:28:08.860941  401365 command_runner.go:130] > # internal_wipe = true
	I1210 06:28:08.860950  401365 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1210 06:28:08.860955  401365 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1210 06:28:08.860959  401365 command_runner.go:130] > # internal_repair = true
	I1210 06:28:08.860964  401365 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1210 06:28:08.860971  401365 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1210 06:28:08.860976  401365 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1210 06:28:08.860981  401365 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1210 06:28:08.860987  401365 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1210 06:28:08.860991  401365 command_runner.go:130] > [crio.api]
	I1210 06:28:08.860997  401365 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1210 06:28:08.861001  401365 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1210 06:28:08.861006  401365 command_runner.go:130] > # IP address on which the stream server will listen.
	I1210 06:28:08.861010  401365 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1210 06:28:08.861017  401365 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1210 06:28:08.861026  401365 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1210 06:28:08.861030  401365 command_runner.go:130] > # stream_port = "0"
	I1210 06:28:08.861035  401365 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1210 06:28:08.861040  401365 command_runner.go:130] > # stream_enable_tls = false
	I1210 06:28:08.861046  401365 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1210 06:28:08.861050  401365 command_runner.go:130] > # stream_idle_timeout = ""
	I1210 06:28:08.861056  401365 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1210 06:28:08.861062  401365 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1210 06:28:08.861066  401365 command_runner.go:130] > # stream_tls_cert = ""
	I1210 06:28:08.861072  401365 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1210 06:28:08.861077  401365 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1210 06:28:08.861081  401365 command_runner.go:130] > # stream_tls_key = ""
	I1210 06:28:08.861087  401365 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1210 06:28:08.861093  401365 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1210 06:28:08.861097  401365 command_runner.go:130] > # automatically pick up the changes.
	I1210 06:28:08.861446  401365 command_runner.go:130] > # stream_tls_ca = ""
	I1210 06:28:08.861478  401365 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 06:28:08.861569  401365 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1210 06:28:08.861581  401365 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 06:28:08.861586  401365 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1210 06:28:08.861593  401365 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1210 06:28:08.861599  401365 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1210 06:28:08.861602  401365 command_runner.go:130] > [crio.runtime]
	I1210 06:28:08.861609  401365 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1210 06:28:08.861614  401365 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1210 06:28:08.861628  401365 command_runner.go:130] > # "nofile=1024:2048"
	I1210 06:28:08.861634  401365 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1210 06:28:08.861638  401365 command_runner.go:130] > # default_ulimits = [
	I1210 06:28:08.861653  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861660  401365 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1210 06:28:08.861663  401365 command_runner.go:130] > # no_pivot = false
	I1210 06:28:08.861669  401365 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1210 06:28:08.861675  401365 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1210 06:28:08.861681  401365 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1210 06:28:08.861687  401365 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1210 06:28:08.861696  401365 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1210 06:28:08.861703  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 06:28:08.861707  401365 command_runner.go:130] > # conmon = ""
	I1210 06:28:08.861711  401365 command_runner.go:130] > # Cgroup setting for conmon
	I1210 06:28:08.861718  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1210 06:28:08.861722  401365 command_runner.go:130] > conmon_cgroup = "pod"
	I1210 06:28:08.861728  401365 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1210 06:28:08.861733  401365 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1210 06:28:08.861740  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 06:28:08.861744  401365 command_runner.go:130] > # conmon_env = [
	I1210 06:28:08.861747  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861753  401365 command_runner.go:130] > # Additional environment variables to set for all the
	I1210 06:28:08.861758  401365 command_runner.go:130] > # containers. These are overridden if set in the
	I1210 06:28:08.861764  401365 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1210 06:28:08.861768  401365 command_runner.go:130] > # default_env = [
	I1210 06:28:08.861771  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861787  401365 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1210 06:28:08.861795  401365 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1210 06:28:08.861799  401365 command_runner.go:130] > # selinux = false
	I1210 06:28:08.861809  401365 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1210 06:28:08.861817  401365 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1210 06:28:08.861823  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862101  401365 command_runner.go:130] > # seccomp_profile = ""
	I1210 06:28:08.862113  401365 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1210 06:28:08.862119  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862201  401365 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1210 06:28:08.862211  401365 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1210 06:28:08.862225  401365 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1210 06:28:08.862232  401365 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1210 06:28:08.862239  401365 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1210 06:28:08.862244  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862248  401365 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1210 06:28:08.862254  401365 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1210 06:28:08.862259  401365 command_runner.go:130] > # the cgroup blockio controller.
	I1210 06:28:08.862263  401365 command_runner.go:130] > # blockio_config_file = ""
	I1210 06:28:08.862273  401365 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1210 06:28:08.862283  401365 command_runner.go:130] > # blockio parameters.
	I1210 06:28:08.862294  401365 command_runner.go:130] > # blockio_reload = false
	I1210 06:28:08.862301  401365 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1210 06:28:08.862304  401365 command_runner.go:130] > # irqbalance daemon.
	I1210 06:28:08.862310  401365 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1210 06:28:08.862316  401365 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1210 06:28:08.862323  401365 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1210 06:28:08.862330  401365 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1210 06:28:08.862336  401365 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1210 06:28:08.862342  401365 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1210 06:28:08.862347  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862351  401365 command_runner.go:130] > # rdt_config_file = ""
	I1210 06:28:08.862356  401365 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1210 06:28:08.862384  401365 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1210 06:28:08.862391  401365 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1210 06:28:08.862666  401365 command_runner.go:130] > # separate_pull_cgroup = ""
	I1210 06:28:08.862678  401365 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1210 06:28:08.862685  401365 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1210 06:28:08.862689  401365 command_runner.go:130] > # will be added.
	I1210 06:28:08.862693  401365 command_runner.go:130] > # default_capabilities = [
	I1210 06:28:08.862777  401365 command_runner.go:130] > # 	"CHOWN",
	I1210 06:28:08.862786  401365 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1210 06:28:08.862797  401365 command_runner.go:130] > # 	"FSETID",
	I1210 06:28:08.862802  401365 command_runner.go:130] > # 	"FOWNER",
	I1210 06:28:08.862806  401365 command_runner.go:130] > # 	"SETGID",
	I1210 06:28:08.862809  401365 command_runner.go:130] > # 	"SETUID",
	I1210 06:28:08.862838  401365 command_runner.go:130] > # 	"SETPCAP",
	I1210 06:28:08.862844  401365 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1210 06:28:08.862847  401365 command_runner.go:130] > # 	"KILL",
	I1210 06:28:08.862850  401365 command_runner.go:130] > # ]
	I1210 06:28:08.862858  401365 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1210 06:28:08.862865  401365 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1210 06:28:08.863095  401365 command_runner.go:130] > # add_inheritable_capabilities = false
	I1210 06:28:08.863106  401365 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1210 06:28:08.863112  401365 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 06:28:08.863116  401365 command_runner.go:130] > default_sysctls = [
	I1210 06:28:08.863203  401365 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1210 06:28:08.863243  401365 command_runner.go:130] > ]
	I1210 06:28:08.863252  401365 command_runner.go:130] > # List of devices on the host that a
	I1210 06:28:08.863259  401365 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1210 06:28:08.863263  401365 command_runner.go:130] > # allowed_devices = [
	I1210 06:28:08.863314  401365 command_runner.go:130] > # 	"/dev/fuse",
	I1210 06:28:08.863326  401365 command_runner.go:130] > # 	"/dev/net/tun",
	I1210 06:28:08.863333  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863338  401365 command_runner.go:130] > # List of additional devices. specified as
	I1210 06:28:08.863345  401365 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1210 06:28:08.863351  401365 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1210 06:28:08.863357  401365 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 06:28:08.863361  401365 command_runner.go:130] > # additional_devices = [
	I1210 06:28:08.863363  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863368  401365 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1210 06:28:08.863372  401365 command_runner.go:130] > # cdi_spec_dirs = [
	I1210 06:28:08.863376  401365 command_runner.go:130] > # 	"/etc/cdi",
	I1210 06:28:08.863379  401365 command_runner.go:130] > # 	"/var/run/cdi",
	I1210 06:28:08.863382  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863388  401365 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1210 06:28:08.863394  401365 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1210 06:28:08.863398  401365 command_runner.go:130] > # Defaults to false.
	I1210 06:28:08.863403  401365 command_runner.go:130] > # device_ownership_from_security_context = false
	I1210 06:28:08.863410  401365 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1210 06:28:08.863415  401365 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1210 06:28:08.863419  401365 command_runner.go:130] > # hooks_dir = [
	I1210 06:28:08.863604  401365 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1210 06:28:08.863612  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863618  401365 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1210 06:28:08.863625  401365 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1210 06:28:08.863630  401365 command_runner.go:130] > # its default mounts from the following two files:
	I1210 06:28:08.863633  401365 command_runner.go:130] > #
	I1210 06:28:08.863640  401365 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1210 06:28:08.863646  401365 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1210 06:28:08.863652  401365 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1210 06:28:08.863655  401365 command_runner.go:130] > #
	I1210 06:28:08.863661  401365 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1210 06:28:08.863676  401365 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1210 06:28:08.863683  401365 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1210 06:28:08.863687  401365 command_runner.go:130] > #      only add mounts it finds in this file.
	I1210 06:28:08.863690  401365 command_runner.go:130] > #
	I1210 06:28:08.863719  401365 command_runner.go:130] > # default_mounts_file = ""
	I1210 06:28:08.863725  401365 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1210 06:28:08.863732  401365 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1210 06:28:08.863736  401365 command_runner.go:130] > # pids_limit = -1
	I1210 06:28:08.863742  401365 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1210 06:28:08.863748  401365 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1210 06:28:08.863761  401365 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1210 06:28:08.863771  401365 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1210 06:28:08.863775  401365 command_runner.go:130] > # log_size_max = -1
	I1210 06:28:08.863782  401365 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1210 06:28:08.863786  401365 command_runner.go:130] > # log_to_journald = false
	I1210 06:28:08.863792  401365 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1210 06:28:08.863974  401365 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1210 06:28:08.863984  401365 command_runner.go:130] > # Path to directory for container attach sockets.
	I1210 06:28:08.863990  401365 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1210 06:28:08.863996  401365 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1210 06:28:08.864082  401365 command_runner.go:130] > # bind_mount_prefix = ""
	I1210 06:28:08.864098  401365 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1210 06:28:08.864139  401365 command_runner.go:130] > # read_only = false
	I1210 06:28:08.864149  401365 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1210 06:28:08.864156  401365 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1210 06:28:08.864159  401365 command_runner.go:130] > # live configuration reload.
	I1210 06:28:08.864163  401365 command_runner.go:130] > # log_level = "info"
	I1210 06:28:08.864169  401365 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1210 06:28:08.864174  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.864178  401365 command_runner.go:130] > # log_filter = ""
	I1210 06:28:08.864183  401365 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1210 06:28:08.864190  401365 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1210 06:28:08.864193  401365 command_runner.go:130] > # separated by comma.
	I1210 06:28:08.864208  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864211  401365 command_runner.go:130] > # uid_mappings = ""
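	As a sketch of the containerUID:HostUID:Size format (values illustrative; the option is deprecated per the note above):

		uid_mappings = "0:100000:65536,65536:200000:1024"   # two comma-separated ranges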
	I1210 06:28:08.864218  401365 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1210 06:28:08.864224  401365 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1210 06:28:08.864228  401365 command_runner.go:130] > # separated by comma.
	I1210 06:28:08.864236  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864440  401365 command_runner.go:130] > # gid_mappings = ""
	I1210 06:28:08.864451  401365 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1210 06:28:08.864458  401365 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 06:28:08.864465  401365 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 06:28:08.864473  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864477  401365 command_runner.go:130] > # minimum_mappable_uid = -1
	I1210 06:28:08.864483  401365 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1210 06:28:08.864493  401365 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 06:28:08.864501  401365 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 06:28:08.864514  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864541  401365 command_runner.go:130] > # minimum_mappable_gid = -1
	I1210 06:28:08.864548  401365 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1210 06:28:08.864555  401365 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1210 06:28:08.864560  401365 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1210 06:28:08.864572  401365 command_runner.go:130] > # ctr_stop_timeout = 30
	I1210 06:28:08.864578  401365 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1210 06:28:08.864588  401365 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1210 06:28:08.864593  401365 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1210 06:28:08.864598  401365 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1210 06:28:08.864602  401365 command_runner.go:130] > # drop_infra_ctr = true
	I1210 06:28:08.864608  401365 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1210 06:28:08.864613  401365 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1210 06:28:08.864621  401365 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1210 06:28:08.864625  401365 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1210 06:28:08.864632  401365 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1210 06:28:08.864638  401365 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1210 06:28:08.864644  401365 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1210 06:28:08.864649  401365 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1210 06:28:08.864653  401365 command_runner.go:130] > # shared_cpuset = ""
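	A minimal sketch of both cpuset options, assuming a hypothetical 8-CPU host and Linux CPU list syntax:

		infra_ctr_cpuset = "0-1"     # pin infra containers to CPUs 0 and 1
		shared_cpuset = "2-3,6"      # CPUs that guaranteed containers may share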
	I1210 06:28:08.864659  401365 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1210 06:28:08.864664  401365 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1210 06:28:08.864668  401365 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1210 06:28:08.864675  401365 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1210 06:28:08.864858  401365 command_runner.go:130] > # pinns_path = ""
	I1210 06:28:08.864869  401365 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1210 06:28:08.864876  401365 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1210 06:28:08.864881  401365 command_runner.go:130] > # enable_criu_support = true
	I1210 06:28:08.864886  401365 command_runner.go:130] > # Enable/disable the generation of the container and
	I1210 06:28:08.864892  401365 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1210 06:28:08.864935  401365 command_runner.go:130] > # enable_pod_events = false
	I1210 06:28:08.864946  401365 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 06:28:08.864960  401365 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1210 06:28:08.865092  401365 command_runner.go:130] > # default_runtime = "crun"
	I1210 06:28:08.865104  401365 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1210 06:28:08.865112  401365 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1210 06:28:08.865122  401365 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1210 06:28:08.865127  401365 command_runner.go:130] > # creation as a file is not desired either.
	I1210 06:28:08.865136  401365 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1210 06:28:08.865141  401365 command_runner.go:130] > # the hostname is being managed dynamically.
	I1210 06:28:08.865146  401365 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1210 06:28:08.865148  401365 command_runner.go:130] > # ]
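	Using the /etc/hostname case mentioned above, a populated list would look like:

		absent_mount_sources_to_reject = [
			"/etc/hostname",
		]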
	I1210 06:28:08.865158  401365 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1210 06:28:08.865165  401365 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1210 06:28:08.865171  401365 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1210 06:28:08.865177  401365 command_runner.go:130] > # Each entry in the table should follow the format:
	I1210 06:28:08.865179  401365 command_runner.go:130] > #
	I1210 06:28:08.865200  401365 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1210 06:28:08.865207  401365 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1210 06:28:08.865210  401365 command_runner.go:130] > # runtime_type = "oci"
	I1210 06:28:08.865215  401365 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1210 06:28:08.865219  401365 command_runner.go:130] > # inherit_default_runtime = false
	I1210 06:28:08.865224  401365 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1210 06:28:08.865229  401365 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1210 06:28:08.865233  401365 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1210 06:28:08.865236  401365 command_runner.go:130] > # monitor_env = []
	I1210 06:28:08.865241  401365 command_runner.go:130] > # privileged_without_host_devices = false
	I1210 06:28:08.865245  401365 command_runner.go:130] > # allowed_annotations = []
	I1210 06:28:08.865250  401365 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1210 06:28:08.865253  401365 command_runner.go:130] > # no_sync_log = false
	I1210 06:28:08.865257  401365 command_runner.go:130] > # default_annotations = {}
	I1210 06:28:08.865261  401365 command_runner.go:130] > # stream_websockets = false
	I1210 06:28:08.865265  401365 command_runner.go:130] > # seccomp_profile = ""
	I1210 06:28:08.865296  401365 command_runner.go:130] > # Where:
	I1210 06:28:08.865301  401365 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1210 06:28:08.865308  401365 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1210 06:28:08.865314  401365 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1210 06:28:08.865320  401365 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1210 06:28:08.865323  401365 command_runner.go:130] > #   in $PATH.
	I1210 06:28:08.865330  401365 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1210 06:28:08.865334  401365 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1210 06:28:08.865341  401365 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1210 06:28:08.865344  401365 command_runner.go:130] > #   state.
	I1210 06:28:08.865352  401365 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1210 06:28:08.865360  401365 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1210 06:28:08.865368  401365 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1210 06:28:08.865376  401365 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1210 06:28:08.865381  401365 command_runner.go:130] > #   the values from the default runtime on load time.
	I1210 06:28:08.865387  401365 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1210 06:28:08.865392  401365 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1210 06:28:08.865399  401365 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1210 06:28:08.865406  401365 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1210 06:28:08.865411  401365 command_runner.go:130] > #   The currently recognized values are:
	I1210 06:28:08.865417  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1210 06:28:08.865425  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1210 06:28:08.865431  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1210 06:28:08.865437  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1210 06:28:08.865444  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1210 06:28:08.865451  401365 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1210 06:28:08.865458  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1210 06:28:08.865464  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1210 06:28:08.865470  401365 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1210 06:28:08.865492  401365 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1210 06:28:08.865501  401365 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1210 06:28:08.865507  401365 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1210 06:28:08.865513  401365 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1210 06:28:08.865519  401365 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1210 06:28:08.865525  401365 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1210 06:28:08.865533  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1210 06:28:08.865539  401365 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1210 06:28:08.865552  401365 command_runner.go:130] > #   deprecated option "conmon".
	I1210 06:28:08.865560  401365 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1210 06:28:08.865565  401365 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1210 06:28:08.865572  401365 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1210 06:28:08.865578  401365 command_runner.go:130] > #   should be moved to the container's cgroup.
	I1210 06:28:08.865587  401365 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1210 06:28:08.865592  401365 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1210 06:28:08.865599  401365 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1210 06:28:08.865607  401365 command_runner.go:130] > #   conmon-rs by using:
	I1210 06:28:08.865615  401365 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1210 06:28:08.865622  401365 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1210 06:28:08.865630  401365 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1210 06:28:08.865636  401365 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1210 06:28:08.865642  401365 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1210 06:28:08.865649  401365 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1210 06:28:08.865657  401365 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1210 06:28:08.865661  401365 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1210 06:28:08.865669  401365 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1210 06:28:08.865677  401365 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1210 06:28:08.865685  401365 command_runner.go:130] > #   when a machine crash happens.
	I1210 06:28:08.865693  401365 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1210 06:28:08.865700  401365 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1210 06:28:08.865708  401365 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1210 06:28:08.865713  401365 command_runner.go:130] > #   seccomp profile for the runtime.
	I1210 06:28:08.865719  401365 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1210 06:28:08.865744  401365 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1210 06:28:08.865747  401365 command_runner.go:130] > #
	I1210 06:28:08.865751  401365 command_runner.go:130] > # Using the seccomp notifier feature:
	I1210 06:28:08.865754  401365 command_runner.go:130] > #
	I1210 06:28:08.865762  401365 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1210 06:28:08.865768  401365 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1210 06:28:08.865771  401365 command_runner.go:130] > #
	I1210 06:28:08.865777  401365 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1210 06:28:08.865783  401365 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1210 06:28:08.865785  401365 command_runner.go:130] > #
	I1210 06:28:08.865793  401365 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1210 06:28:08.865797  401365 command_runner.go:130] > # feature.
	I1210 06:28:08.865800  401365 command_runner.go:130] > #
	I1210 06:28:08.865807  401365 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1210 06:28:08.865813  401365 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1210 06:28:08.865819  401365 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1210 06:28:08.865832  401365 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1210 06:28:08.865838  401365 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1210 06:28:08.865841  401365 command_runner.go:130] > #
	I1210 06:28:08.865847  401365 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1210 06:28:08.865853  401365 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1210 06:28:08.865856  401365 command_runner.go:130] > #
	I1210 06:28:08.865862  401365 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1210 06:28:08.865870  401365 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1210 06:28:08.865873  401365 command_runner.go:130] > #
	I1210 06:28:08.865880  401365 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1210 06:28:08.865885  401365 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1210 06:28:08.865889  401365 command_runner.go:130] > # limitation.
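	Pulling the fields above together, a custom handler entry could look like the following sketch; the kata name, paths, and values are assumptions for illustration, not taken from this configuration:

		[crio.runtime.runtimes.kata]
		runtime_path = "/usr/bin/kata-runtime"                 # hypothetical binary location
		runtime_type = "vm"
		runtime_config_path = "/etc/kata/configuration.toml"   # only valid with the vm runtime_type
		privileged_without_host_devices = true
		allowed_annotations = [
			"io.kubernetes.cri-o.Devices",
		]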
	I1210 06:28:08.865905  401365 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1210 06:28:08.866331  401365 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1210 06:28:08.866426  401365 command_runner.go:130] > runtime_type = ""
	I1210 06:28:08.866446  401365 command_runner.go:130] > runtime_root = "/run/crun"
	I1210 06:28:08.866464  401365 command_runner.go:130] > inherit_default_runtime = false
	I1210 06:28:08.866497  401365 command_runner.go:130] > runtime_config_path = ""
	I1210 06:28:08.866524  401365 command_runner.go:130] > container_min_memory = ""
	I1210 06:28:08.866577  401365 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 06:28:08.866606  401365 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 06:28:08.866632  401365 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 06:28:08.866654  401365 command_runner.go:130] > allowed_annotations = [
	I1210 06:28:08.866675  401365 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1210 06:28:08.866694  401365 command_runner.go:130] > ]
	I1210 06:28:08.866728  401365 command_runner.go:130] > privileged_without_host_devices = false
	I1210 06:28:08.866748  401365 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1210 06:28:08.866769  401365 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1210 06:28:08.866790  401365 command_runner.go:130] > runtime_type = ""
	I1210 06:28:08.866821  401365 command_runner.go:130] > runtime_root = "/run/runc"
	I1210 06:28:08.866840  401365 command_runner.go:130] > inherit_default_runtime = false
	I1210 06:28:08.866860  401365 command_runner.go:130] > runtime_config_path = ""
	I1210 06:28:08.866880  401365 command_runner.go:130] > container_min_memory = ""
	I1210 06:28:08.866908  401365 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 06:28:08.866932  401365 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 06:28:08.866953  401365 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 06:28:08.866974  401365 command_runner.go:130] > privileged_without_host_devices = false
	I1210 06:28:08.867007  401365 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1210 06:28:08.867043  401365 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1210 06:28:08.867068  401365 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1210 06:28:08.867104  401365 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1210 06:28:08.867134  401365 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1210 06:28:08.867162  401365 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1210 06:28:08.867185  401365 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1210 06:28:08.867213  401365 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1210 06:28:08.867246  401365 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1210 06:28:08.867272  401365 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified,
	I1210 06:28:08.867293  401365 command_runner.go:130] > # signifying that the default value should be overridden for that resource type.
	I1210 06:28:08.867324  401365 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1210 06:28:08.867347  401365 command_runner.go:130] > # Example:
	I1210 06:28:08.867368  401365 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1210 06:28:08.867390  401365 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1210 06:28:08.867422  401365 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1210 06:28:08.867444  401365 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1210 06:28:08.867461  401365 command_runner.go:130] > # cpuset = "0-1"
	I1210 06:28:08.867481  401365 command_runner.go:130] > # cpushares = "5"
	I1210 06:28:08.867501  401365 command_runner.go:130] > # cpuquota = "1000"
	I1210 06:28:08.867527  401365 command_runner.go:130] > # cpuperiod = "100000"
	I1210 06:28:08.867550  401365 command_runner.go:130] > # cpulimit = "35"
	I1210 06:28:08.867570  401365 command_runner.go:130] > # Where:
	I1210 06:28:08.867591  401365 command_runner.go:130] > # The workload name is workload-type.
	I1210 06:28:08.867625  401365 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1210 06:28:08.867647  401365 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1210 06:28:08.867667  401365 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1210 06:28:08.867691  401365 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1210 06:28:08.867724  401365 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
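	Read together, the rules above would allow a table like the following sketch, where the workload name and annotation keys are hypothetical and cpulimit takes precedence over any cpuquota:

		[crio.runtime.workloads.throttled]
		activation_annotation = "io.crio/throttled"
		annotation_prefix = "io.crio.throttled"
		[crio.runtime.workloads.throttled.resources]
		cpuperiod = "100000"   # microseconds
		cpulimit = "500"       # millicores; used with cpuperiod to derive cpuquota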
	I1210 06:28:08.867747  401365 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1210 06:28:08.867767  401365 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1210 06:28:08.867786  401365 command_runner.go:130] > # Default value is set to true
	I1210 06:28:08.867808  401365 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1210 06:28:08.867842  401365 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1210 06:28:08.867862  401365 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1210 06:28:08.867882  401365 command_runner.go:130] > # Default value is set to 'false'
	I1210 06:28:08.867915  401365 command_runner.go:130] > # disable_hostport_mapping = false
	I1210 06:28:08.867942  401365 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1210 06:28:08.867964  401365 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1210 06:28:08.867982  401365 command_runner.go:130] > # timezone = ""
	I1210 06:28:08.868015  401365 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1210 06:28:08.868041  401365 command_runner.go:130] > #
	I1210 06:28:08.868060  401365 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1210 06:28:08.868081  401365 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1210 06:28:08.868110  401365 command_runner.go:130] > [crio.image]
	I1210 06:28:08.868133  401365 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1210 06:28:08.868150  401365 command_runner.go:130] > # default_transport = "docker://"
	I1210 06:28:08.868170  401365 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1210 06:28:08.868192  401365 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1210 06:28:08.868219  401365 command_runner.go:130] > # global_auth_file = ""
	I1210 06:28:08.868243  401365 command_runner.go:130] > # The image used to instantiate infra containers.
	I1210 06:28:08.868264  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.868284  401365 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1210 06:28:08.868317  401365 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1210 06:28:08.868338  401365 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1210 06:28:08.868357  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.868374  401365 command_runner.go:130] > # pause_image_auth_file = ""
	I1210 06:28:08.868396  401365 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1210 06:28:08.868423  401365 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1210 06:28:08.868450  401365 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1210 06:28:08.868474  401365 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1210 06:28:08.868753  401365 command_runner.go:130] > # pause_command = "/pause"
	I1210 06:28:08.868765  401365 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1210 06:28:08.868772  401365 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1210 06:28:08.868778  401365 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1210 06:28:08.868784  401365 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1210 06:28:08.868791  401365 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1210 06:28:08.868797  401365 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1210 06:28:08.868802  401365 command_runner.go:130] > # pinned_images = [
	I1210 06:28:08.868834  401365 command_runner.go:130] > # ]
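	An illustrative pinned_images list exercising all three pattern styles described above (image names are examples only):

		pinned_images = [
			"registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
			"registry.k8s.io/etcd*",          # glob: wildcard at the end
			"*coredns*",                      # keyword: wildcards on both ends
		]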
	I1210 06:28:08.868841  401365 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1210 06:28:08.868848  401365 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1210 06:28:08.868855  401365 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1210 06:28:08.868864  401365 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1210 06:28:08.868877  401365 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1210 06:28:08.868892  401365 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1210 06:28:08.868897  401365 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1210 06:28:08.868904  401365 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1210 06:28:08.868911  401365 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1210 06:28:08.868917  401365 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1210 06:28:08.868924  401365 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1210 06:28:08.868928  401365 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1210 06:28:08.868935  401365 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1210 06:28:08.868941  401365 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1210 06:28:08.868945  401365 command_runner.go:130] > # changing them here.
	I1210 06:28:08.868950  401365 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1210 06:28:08.868954  401365 command_runner.go:130] > # insecure_registries = [
	I1210 06:28:08.868957  401365 command_runner.go:130] > # ]
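	Since this option is deprecated, the equivalent normally belongs in containers-registries.conf(5); a minimal sketch, assuming a hypothetical internal registry:

		[[registry]]
		location = "registry.internal.example:5000"
		insecure = true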
	I1210 06:28:08.868964  401365 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1210 06:28:08.868968  401365 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1210 06:28:08.868972  401365 command_runner.go:130] > # image_volumes = "mkdir"
	I1210 06:28:08.868978  401365 command_runner.go:130] > # Temporary directory to use for storing big files
	I1210 06:28:08.868982  401365 command_runner.go:130] > # big_files_temporary_dir = ""
	I1210 06:28:08.868988  401365 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1210 06:28:08.868995  401365 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1210 06:28:08.868999  401365 command_runner.go:130] > # auto_reload_registries = false
	I1210 06:28:08.869006  401365 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1210 06:28:08.869014  401365 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1210 06:28:08.869022  401365 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1210 06:28:08.869027  401365 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1210 06:28:08.869031  401365 command_runner.go:130] > # The mode of short name resolution.
	I1210 06:28:08.869039  401365 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1210 06:28:08.869047  401365 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1210 06:28:08.869051  401365 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1210 06:28:08.869055  401365 command_runner.go:130] > # short_name_mode = "enforcing"
	I1210 06:28:08.869061  401365 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1210 06:28:08.869067  401365 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1210 06:28:08.869299  401365 command_runner.go:130] > # oci_artifact_mount_support = true
	I1210 06:28:08.869316  401365 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1210 06:28:08.869329  401365 command_runner.go:130] > # CNI plugins.
	I1210 06:28:08.869333  401365 command_runner.go:130] > [crio.network]
	I1210 06:28:08.869340  401365 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1210 06:28:08.869346  401365 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1210 06:28:08.869485  401365 command_runner.go:130] > # cni_default_network = ""
	I1210 06:28:08.869502  401365 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1210 06:28:08.869709  401365 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1210 06:28:08.869721  401365 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1210 06:28:08.869725  401365 command_runner.go:130] > # plugin_dirs = [
	I1210 06:28:08.869729  401365 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1210 06:28:08.869732  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869736  401365 command_runner.go:130] > # List of included pod metrics.
	I1210 06:28:08.869740  401365 command_runner.go:130] > # included_pod_metrics = [
	I1210 06:28:08.869743  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869749  401365 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1210 06:28:08.869752  401365 command_runner.go:130] > [crio.metrics]
	I1210 06:28:08.869757  401365 command_runner.go:130] > # Globally enable or disable metrics support.
	I1210 06:28:08.869763  401365 command_runner.go:130] > # enable_metrics = false
	I1210 06:28:08.869767  401365 command_runner.go:130] > # Specify enabled metrics collectors.
	I1210 06:28:08.869772  401365 command_runner.go:130] > # Per default all metrics are enabled.
	I1210 06:28:08.869778  401365 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1210 06:28:08.869785  401365 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1210 06:28:08.869791  401365 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1210 06:28:08.869796  401365 command_runner.go:130] > # metrics_collectors = [
	I1210 06:28:08.869800  401365 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1210 06:28:08.869805  401365 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1210 06:28:08.869809  401365 command_runner.go:130] > # 	"containers_oom_total",
	I1210 06:28:08.869813  401365 command_runner.go:130] > # 	"processes_defunct",
	I1210 06:28:08.869817  401365 command_runner.go:130] > # 	"operations_total",
	I1210 06:28:08.869821  401365 command_runner.go:130] > # 	"operations_latency_seconds",
	I1210 06:28:08.869826  401365 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1210 06:28:08.869830  401365 command_runner.go:130] > # 	"operations_errors_total",
	I1210 06:28:08.869834  401365 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1210 06:28:08.869839  401365 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1210 06:28:08.869843  401365 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1210 06:28:08.869851  401365 command_runner.go:130] > # 	"image_pulls_success_total",
	I1210 06:28:08.869855  401365 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1210 06:28:08.869860  401365 command_runner.go:130] > # 	"containers_oom_count_total",
	I1210 06:28:08.869865  401365 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1210 06:28:08.869873  401365 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1210 06:28:08.869878  401365 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1210 06:28:08.869881  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869887  401365 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1210 06:28:08.869891  401365 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1210 06:28:08.869896  401365 command_runner.go:130] > # The port on which the metrics server will listen.
	I1210 06:28:08.869901  401365 command_runner.go:130] > # metrics_port = 9090
	I1210 06:28:08.869906  401365 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1210 06:28:08.869910  401365 command_runner.go:130] > # metrics_socket = ""
	I1210 06:28:08.869915  401365 command_runner.go:130] > # The certificate for the secure metrics server.
	I1210 06:28:08.869921  401365 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1210 06:28:08.869928  401365 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1210 06:28:08.869934  401365 command_runner.go:130] > # certificate on any modification event.
	I1210 06:28:08.869938  401365 command_runner.go:130] > # metrics_cert = ""
	I1210 06:28:08.869943  401365 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1210 06:28:08.869948  401365 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1210 06:28:08.869963  401365 command_runner.go:130] > # metrics_key = ""
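	A sketch that enables the metrics server with a reduced collector set; the collector names come from the default list above, and the host/port are the documented defaults:

		[crio.metrics]
		enable_metrics = true
		metrics_host = "127.0.0.1"
		metrics_port = 9090
		metrics_collectors = [
			"operations_total",
			"image_pulls_failure_total",
		]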
	I1210 06:28:08.869970  401365 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1210 06:28:08.869973  401365 command_runner.go:130] > [crio.tracing]
	I1210 06:28:08.869978  401365 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1210 06:28:08.869982  401365 command_runner.go:130] > # enable_tracing = false
	I1210 06:28:08.869987  401365 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1210 06:28:08.869992  401365 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1210 06:28:08.869999  401365 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1210 06:28:08.870003  401365 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
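	A sketch enabling tracing against the default collector address, with the always-sample rate mentioned above:

		[crio.tracing]
		enable_tracing = true
		tracing_endpoint = "127.0.0.1:4317"
		tracing_sampling_rate_per_million = 1000000   # 1000000 = always sample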
	I1210 06:28:08.870007  401365 command_runner.go:130] > # CRI-O NRI configuration.
	I1210 06:28:08.870010  401365 command_runner.go:130] > [crio.nri]
	I1210 06:28:08.870014  401365 command_runner.go:130] > # Globally enable or disable NRI.
	I1210 06:28:08.870018  401365 command_runner.go:130] > # enable_nri = true
	I1210 06:28:08.870022  401365 command_runner.go:130] > # NRI socket to listen on.
	I1210 06:28:08.870026  401365 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1210 06:28:08.870031  401365 command_runner.go:130] > # NRI plugin directory to use.
	I1210 06:28:08.870035  401365 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1210 06:28:08.870044  401365 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1210 06:28:08.870049  401365 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1210 06:28:08.870054  401365 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1210 06:28:08.870120  401365 command_runner.go:130] > # nri_disable_connections = false
	I1210 06:28:08.870126  401365 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1210 06:28:08.870131  401365 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1210 06:28:08.870136  401365 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1210 06:28:08.870140  401365 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1210 06:28:08.870144  401365 command_runner.go:130] > # NRI default validator configuration.
	I1210 06:28:08.870151  401365 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1210 06:28:08.870158  401365 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1210 06:28:08.870166  401365 command_runner.go:130] > # can be restricted/rejected:
	I1210 06:28:08.870170  401365 command_runner.go:130] > # - OCI hook injection
	I1210 06:28:08.870176  401365 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1210 06:28:08.870182  401365 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1210 06:28:08.870187  401365 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1210 06:28:08.870192  401365 command_runner.go:130] > # - adjustment of linux namespaces
	I1210 06:28:08.870198  401365 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1210 06:28:08.870204  401365 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1210 06:28:08.870211  401365 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1210 06:28:08.870214  401365 command_runner.go:130] > #
	I1210 06:28:08.870219  401365 command_runner.go:130] > # [crio.nri.default_validator]
	I1210 06:28:08.870224  401365 command_runner.go:130] > # nri_enable_default_validator = false
	I1210 06:28:08.870229  401365 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1210 06:28:08.870235  401365 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1210 06:28:08.870240  401365 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1210 06:28:08.870245  401365 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1210 06:28:08.870249  401365 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1210 06:28:08.870254  401365 command_runner.go:130] > # nri_validator_required_plugins = [
	I1210 06:28:08.870256  401365 command_runner.go:130] > # ]
	I1210 06:28:08.870261  401365 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
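	A sketch turning on the default validator with one rejection rule and one required plugin; the plugin name is hypothetical:

		[crio.nri]
		enable_nri = true
		[crio.nri.default_validator]
		nri_enable_default_validator = true
		nri_validator_reject_oci_hook_adjustment = true
		nri_validator_required_plugins = [
			"my-policy-plugin",   # hypothetical NRI plugin name
		]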
	I1210 06:28:08.870267  401365 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1210 06:28:08.870270  401365 command_runner.go:130] > [crio.stats]
	I1210 06:28:08.870279  401365 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1210 06:28:08.870285  401365 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1210 06:28:08.870289  401365 command_runner.go:130] > # stats_collection_period = 0
	I1210 06:28:08.870295  401365 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1210 06:28:08.870301  401365 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1210 06:28:08.870309  401365 command_runner.go:130] > # collection_period = 0
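	A sketch switching both collectors from on-demand to periodic collection (the intervals are arbitrary examples):

		[crio.stats]
		stats_collection_period = 10   # pod/container stats every 10 seconds
		collection_period = 30         # stats plus pod sandbox metrics every 30 seconds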
	I1210 06:28:08.872234  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838776003Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1210 06:28:08.872284  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838812886Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1210 06:28:08.872309  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838840094Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1210 06:28:08.872334  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839193559Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1210 06:28:08.872381  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839375723Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:08.872413  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839707715Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1210 06:28:08.872441  401365 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1210 06:28:08.872553  401365 cni.go:84] Creating CNI manager for ""
	I1210 06:28:08.872583  401365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:28:08.872624  401365 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:28:08.872677  401365 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-253997 NodeName:functional-253997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:28:08.872842  401365 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-253997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:28:08.872963  401365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:28:08.882589  401365 command_runner.go:130] > kubeadm
	I1210 06:28:08.882664  401365 command_runner.go:130] > kubectl
	I1210 06:28:08.882683  401365 command_runner.go:130] > kubelet
	I1210 06:28:08.883772  401365 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:28:08.883860  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:28:08.894311  401365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:28:08.917477  401365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:28:08.933123  401365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1210 06:28:08.951215  401365 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:28:08.955022  401365 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 06:28:08.955137  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:09.068336  401365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:28:09.626369  401365 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997 for IP: 192.168.49.2
	I1210 06:28:09.626393  401365 certs.go:195] generating shared ca certs ...
	I1210 06:28:09.626411  401365 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:09.626560  401365 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 06:28:09.626610  401365 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 06:28:09.626622  401365 certs.go:257] generating profile certs ...
	I1210 06:28:09.626723  401365 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key
	I1210 06:28:09.626797  401365 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423
	I1210 06:28:09.626842  401365 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key
	I1210 06:28:09.626855  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 06:28:09.626868  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 06:28:09.626879  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 06:28:09.626895  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 06:28:09.626917  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 06:28:09.626934  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 06:28:09.626951  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 06:28:09.626967  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 06:28:09.627018  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 06:28:09.627054  401365 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 06:28:09.627067  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:28:09.627098  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:28:09.627129  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:28:09.627160  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 06:28:09.627208  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:28:09.627243  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.627257  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem -> /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.627269  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> /usr/share/ca-certificates/3642652.pem
	I1210 06:28:09.627907  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:28:09.646839  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:28:09.665451  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:28:09.684144  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:28:09.703168  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:28:09.722766  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:28:09.740755  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:28:09.758979  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:28:09.777915  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:28:09.796193  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 06:28:09.814097  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 06:28:09.831978  401365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:28:09.845391  401365 ssh_runner.go:195] Run: openssl version
	I1210 06:28:09.851779  401365 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 06:28:09.852274  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.860146  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:28:09.868064  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872198  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872310  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872381  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.915298  401365 command_runner.go:130] > b5213941
	I1210 06:28:09.915776  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:28:09.923881  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.931564  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 06:28:09.939347  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943515  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943602  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943706  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.984596  401365 command_runner.go:130] > 51391683
	I1210 06:28:09.985095  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:28:09.992884  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.000682  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 06:28:10.009973  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015475  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015546  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015611  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.058412  401365 command_runner.go:130] > 3ec20f2e
	I1210 06:28:10.059028  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
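
Each CA above is installed the way OpenSSL expects to find trust anchors: the PEM is linked into /etc/ssl/certs under its own name, its subject hash is computed with openssl x509 -hash, and a <hash>.0 symlink is checked (the c_rehash convention). A sketch of the same steps for one file, assuming the hash link is created by hand rather than by the tooling:

	PEM=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$PEM" /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 in this run
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"
	sudo test -L "/etc/ssl/certs/${HASH}.0" && echo linked
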
	I1210 06:28:10.067481  401365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:28:10.072097  401365 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:28:10.072141  401365 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 06:28:10.072148  401365 command_runner.go:130] > Device: 259,1	Inode: 3906312     Links: 1
	I1210 06:28:10.072155  401365 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:28:10.072162  401365 command_runner.go:130] > Access: 2025-12-10 06:24:00.744386425 +0000
	I1210 06:28:10.072185  401365 command_runner.go:130] > Modify: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072211  401365 command_runner.go:130] > Change: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072217  401365 command_runner.go:130] >  Birth: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072295  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:28:10.114065  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.114701  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:28:10.156441  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.157041  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:28:10.198547  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.198997  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:28:10.239473  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.239921  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:28:10.280741  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.281284  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:28:10.322073  401365 command_runner.go:130] > Certificate will not expire
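
The -checkend 86400 flag asks openssl whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 prints "Certificate will not expire", non-zero means it would. The same check for a single file, usable when auditing a node by hand:

	if openssl x509 -noout -checkend 86400 \
	     -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "valid for at least another 24h"
	else
	  echo "expires within 24h; certs need regenerating"
	fi
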
	I1210 06:28:10.322510  401365 kubeadm.go:401] StartCluster: {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:10.322592  401365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:28:10.322670  401365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:28:10.349813  401365 cri.go:89] found id: ""
	I1210 06:28:10.349915  401365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:28:10.357053  401365 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 06:28:10.357076  401365 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 06:28:10.357083  401365 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 06:28:10.358087  401365 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:28:10.358107  401365 kubeadm.go:598] restartPrimaryControlPlane start ...
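
The restart-vs-init decision hangs on the sudo ls just above: if the kubelet flag file, kubelet config, and etcd data directory all exist, minikube takes the cluster-restart path instead of a fresh kubeadm init. A sketch of the same probe:

	sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd \
	  && echo "existing state found: restart path" \
	  || echo "no prior state: init path"
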
	I1210 06:28:10.358179  401365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:28:10.366355  401365 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:28:10.366773  401365 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-253997" does not appear in /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.366892  401365 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-362392/kubeconfig needs updating (will repair): [kubeconfig missing "functional-253997" cluster setting kubeconfig missing "functional-253997" context setting]
	I1210 06:28:10.367176  401365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.367620  401365 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.367775  401365 kapi.go:59] client config for functional-253997: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:28:10.368328  401365 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:28:10.368348  401365 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:28:10.368357  401365 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:28:10.368361  401365 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:28:10.368366  401365 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:28:10.368683  401365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:28:10.368778  401365 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 06:28:10.376809  401365 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 06:28:10.376842  401365 kubeadm.go:602] duration metric: took 18.728652ms to restartPrimaryControlPlane
	I1210 06:28:10.376852  401365 kubeadm.go:403] duration metric: took 54.348915ms to StartCluster
	I1210 06:28:10.376867  401365 settings.go:142] acquiring lock: {Name:mk50f3096e55008942432e93c6cf4976a70e957e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.376930  401365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.377580  401365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.377783  401365 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:28:10.378131  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:10.378203  401365 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:28:10.378273  401365 addons.go:70] Setting storage-provisioner=true in profile "functional-253997"
	I1210 06:28:10.378288  401365 addons.go:239] Setting addon storage-provisioner=true in "functional-253997"
	I1210 06:28:10.378298  401365 addons.go:70] Setting default-storageclass=true in profile "functional-253997"
	I1210 06:28:10.378308  401365 host.go:66] Checking if "functional-253997" exists ...
	I1210 06:28:10.378325  401365 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-253997"
	I1210 06:28:10.378609  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.378772  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.382148  401365 out.go:179] * Verifying Kubernetes components...
	I1210 06:28:10.385829  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:10.411769  401365 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.411927  401365 kapi.go:59] client config for functional-253997: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:28:10.412189  401365 addons.go:239] Setting addon default-storageclass=true in "functional-253997"
	I1210 06:28:10.412217  401365 host.go:66] Checking if "functional-253997" exists ...
	I1210 06:28:10.412622  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.423310  401365 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:28:10.429289  401365 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:10.429319  401365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:28:10.429390  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:10.437508  401365 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:10.437529  401365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:28:10.437602  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:10.484090  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:10.489523  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
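
With the docker driver, every ssh_runner call is an ordinary SSH session into the node container, whose sshd is published on a host port (33159 in this run). Roughly the equivalent manual session, using the key path and port from the log (the remote command is only an example):

	ssh -i /home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa \
	    -p 33159 docker@127.0.0.1 'sudo systemctl is-active kubelet'
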
	I1210 06:28:10.601993  401365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:28:10.611397  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:10.637290  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.377346  401365 node_ready.go:35] waiting up to 6m0s for node "functional-253997" to be "Ready" ...
	I1210 06:28:11.377544  401365 type.go:168] "Request Body" body=""
	I1210 06:28:11.377656  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.377728  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1210 06:28:11.377850  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.377894  401365 retry.go:31] will retry after 259.470683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.378104  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.378200  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.378242  401365 retry.go:31] will retry after 196.4073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.378345  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:11.575829  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.638697  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:11.638779  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.638826  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.638871  401365 retry.go:31] will retry after 208.428392ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.692820  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.696338  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.696370  401365 retry.go:31] will retry after 282.781918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
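
Every apply in this stretch fails before the manifest is even submitted: client-side validation first fetches the OpenAPI schema from the server, and with nothing listening on port 8441 that GET is refused. The error text names the escape hatch; a sketch of what an unvalidated attempt would look like (not what minikube does here, since it simply retries with backoff):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --validate=false \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml

Note that skipping validation only moves the failure to the apply itself, which also needs a live apiserver, so waiting for the control plane to come back is the real fix.
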
	I1210 06:28:11.847619  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.878109  401365 type.go:168] "Request Body" body=""
	I1210 06:28:11.878199  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:11.878519  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:11.905645  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.908839  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.908880  401365 retry.go:31] will retry after 582.02813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.980121  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:12.039691  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.043135  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.043170  401365 retry.go:31] will retry after 432.314142ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.377666  401365 type.go:168] "Request Body" body=""
	I1210 06:28:12.377744  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:12.378081  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:12.476496  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:12.492099  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:12.562290  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.562336  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562356  401365 retry.go:31] will retry after 1.009011504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562409  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.562427  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562433  401365 retry.go:31] will retry after 937.221861ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.877643  401365 type.go:168] "Request Body" body=""
	I1210 06:28:12.877787  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:12.878157  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:13.377677  401365 type.go:168] "Request Body" body=""
	I1210 06:28:13.377754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:13.378100  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:13.378160  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
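
node_ready polls GET /api/v1/nodes/functional-253997 every 500ms and reads the node's Ready condition; while the port is closed each probe fails instantly with connection refused and is retried until the 6m0s deadline. The same check from a shell, assuming kubectl and the profile's kubeconfig:

	kubectl --kubeconfig /home/jenkins/minikube-integration/22094-362392/kubeconfig \
	  get node functional-253997 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
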
	I1210 06:28:13.500598  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:13.556443  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:13.560062  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.560116  401365 retry.go:31] will retry after 1.265541277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.572329  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:13.633856  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:13.637464  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.637509  401365 retry.go:31] will retry after 1.331173049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.877793  401365 type.go:168] "Request Body" body=""
	I1210 06:28:13.877888  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:13.878199  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.377638  401365 type.go:168] "Request Body" body=""
	I1210 06:28:14.377730  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:14.378082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.825793  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:14.878190  401365 type.go:168] "Request Body" body=""
	I1210 06:28:14.878261  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:14.878521  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.884055  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:14.884152  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:14.884201  401365 retry.go:31] will retry after 1.396995132s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:14.969467  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:15.059973  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:15.064387  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:15.064489  401365 retry.go:31] will retry after 957.92161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:15.377700  401365 type.go:168] "Request Body" body=""
	I1210 06:28:15.377772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:15.378126  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:15.378206  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:15.877555  401365 type.go:168] "Request Body" body=""
	I1210 06:28:15.877664  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:15.877987  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:16.023398  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:16.083212  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:16.083269  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.083288  401365 retry.go:31] will retry after 3.316582994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.281469  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:16.346229  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:16.346265  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.346285  401365 retry.go:31] will retry after 2.05295153s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.378603  401365 type.go:168] "Request Body" body=""
	I1210 06:28:16.378688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:16.379017  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:16.877615  401365 type.go:168] "Request Body" body=""
	I1210 06:28:16.877690  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:16.878019  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:17.377588  401365 type.go:168] "Request Body" body=""
	I1210 06:28:17.377663  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:17.377946  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:17.877651  401365 type.go:168] "Request Body" body=""
	I1210 06:28:17.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:17.878120  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:17.878201  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:18.377673  401365 type.go:168] "Request Body" body=""
	I1210 06:28:18.377749  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:18.378108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:18.400386  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:18.462469  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:18.462509  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:18.462528  401365 retry.go:31] will retry after 3.621738225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:18.877637  401365 type.go:168] "Request Body" body=""
	I1210 06:28:18.877719  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:18.878031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:19.377699  401365 type.go:168] "Request Body" body=""
	I1210 06:28:19.377775  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:19.378123  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:19.400389  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:19.462507  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:19.462542  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:19.462562  401365 retry.go:31] will retry after 6.347571238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
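The suggested --validate=false would only skip kubectl's client-side schema validation (the OpenAPI download that fails above); with the apiserver refusing connections, the apply itself would still fail. A sketch of shelling out the way the ssh_runner lines do, mirroring the log's sudo KUBECONFIG=... kubectl apply --force -f ... shape (the helper name and error handling are illustrative assumptions):

    // applyaddon_sketch.go — illustrative shell-out, not minikube's ssh_runner.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyManifest mirrors the command shape in the log: sudo accepts leading
    // VAR=value arguments, so KUBECONFIG is set for the kubectl child only.
    func applyManifest(kubectl, kubeconfig, manifest string) error {
    	cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig,
    		kubectl, "apply", "--force", "-f", manifest)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
    	}
    	return nil
    }

    func main() {
    	err := applyManifest(
    		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		"/etc/kubernetes/addons/storage-provisioner.yaml")
    	fmt.Println(err)
    }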
	I1210 06:28:19.878220  401365 type.go:168] "Request Body" body=""
	I1210 06:28:19.878303  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:19.878573  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:19.878624  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:20.378571  401365 type.go:168] "Request Body" body=""
	I1210 06:28:20.378643  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:20.378957  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:20.877707  401365 type.go:168] "Request Body" body=""
	I1210 06:28:20.877781  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:20.878082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:21.377732  401365 type.go:168] "Request Body" body=""
	I1210 06:28:21.377840  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:21.378217  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:21.877933  401365 type.go:168] "Request Body" body=""
	I1210 06:28:21.878005  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:21.878280  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:22.084823  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:22.150796  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:22.150852  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:22.150872  401365 retry.go:31] will retry after 8.518894464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:22.378239  401365 type.go:168] "Request Body" body=""
	I1210 06:28:22.378314  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:22.378638  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:22.378700  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
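The Request/Response pairs above are client-go's round-tripper logging: a plain GET with a protobuf-first Accept header that never gets an answer. Reconstructed as a bare net/http request (the insecure TLS config is a demo shortcut; a real client would trust the cluster CA instead):

    // request_sketch.go — rebuilds the request round_trippers logs above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    )

    func main() {
    	req, err := http.NewRequest("GET",
    		"https://192.168.49.2:8441/api/v1/nodes/functional-253997", nil)
    	if err != nil {
    		panic(err)
    	}
    	// The two headers shown in the log: prefer protobuf, fall back to JSON.
    	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
    	req.Header.Set("User-Agent", "minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format")

    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
    	}}
    	resp, err := client.Do(req)
    	if err != nil {
    		// While nothing listens on 8441 this prints the log's
    		// "dial tcp 192.168.49.2:8441: connect: connection refused".
    		fmt.Println(err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println(resp.Status)
    }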
	I1210 06:28:22.878392  401365 type.go:168] "Request Body" body=""
	I1210 06:28:22.878470  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:22.878811  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:23.378412  401365 type.go:168] "Request Body" body=""
	I1210 06:28:23.378493  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:23.378816  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:23.878580  401365 type.go:168] "Request Body" body=""
	I1210 06:28:23.878657  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:23.879035  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:24.377745  401365 type.go:168] "Request Body" body=""
	I1210 06:28:24.377829  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:24.378165  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:24.878042  401365 type.go:168] "Request Body" body=""
	I1210 06:28:24.878110  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:24.878379  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:24.878424  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:25.378073  401365 type.go:168] "Request Body" body=""
	I1210 06:28:25.378148  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:25.378492  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:25.811094  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:25.867131  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:25.870279  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:25.870312  401365 retry.go:31] will retry after 4.064346895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:25.878534  401365 type.go:168] "Request Body" body=""
	I1210 06:28:25.878610  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:25.878933  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:26.378357  401365 type.go:168] "Request Body" body=""
	I1210 06:28:26.378423  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:26.378728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:26.878539  401365 type.go:168] "Request Body" body=""
	I1210 06:28:26.878615  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:26.878903  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:26.878950  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:27.377638  401365 type.go:168] "Request Body" body=""
	I1210 06:28:27.377740  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:27.378052  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:27.878386  401365 type.go:168] "Request Body" body=""
	I1210 06:28:27.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:27.878757  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:28.378587  401365 type.go:168] "Request Body" body=""
	I1210 06:28:28.378670  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:28.379016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:28.877671  401365 type.go:168] "Request Body" body=""
	I1210 06:28:28.877745  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:28.878083  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:29.378412  401365 type.go:168] "Request Body" body=""
	I1210 06:28:29.378486  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:29.378756  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:29.378811  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:29.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:28:29.877767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:29.878126  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:29.935383  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:29.993267  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:29.993316  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:29.993335  401365 retry.go:31] will retry after 13.293540925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.377660  401365 type.go:168] "Request Body" body=""
	I1210 06:28:30.377733  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:30.378062  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:30.670723  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:30.731809  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:30.735358  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.735395  401365 retry.go:31] will retry after 6.439855049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.877636  401365 type.go:168] "Request Body" body=""
	I1210 06:28:30.877707  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:30.878037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:31.377698  401365 type.go:168] "Request Body" body=""
	I1210 06:28:31.377773  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:31.378112  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:31.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:28:31.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:31.878135  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:31.878196  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
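A "connection refused" like the one above means the TCP SYN reached 192.168.49.2 and was actively rejected, i.e. nothing is listening on 8441 yet; a hang or timeout would instead point at routing or firewalling. A quick probe that separates the two cases:

    // probe_sketch.go — a reachability check for the apiserver endpoint.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
    	if err != nil {
    		// "connection refused": host up, apiserver socket not open yet.
    		// "i/o timeout": the packet never arrived — a network problem.
    		fmt.Println("probe failed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("port open; apiserver socket is accepting connections")
    }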
	I1210 06:28:32.377829  401365 type.go:168] "Request Body" body=""
	I1210 06:28:32.377902  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:32.378167  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:32.877646  401365 type.go:168] "Request Body" body=""
	I1210 06:28:32.877742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:32.878081  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:33.377665  401365 type.go:168] "Request Body" body=""
	I1210 06:28:33.377750  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:33.378080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:33.878372  401365 type.go:168] "Request Body" body=""
	I1210 06:28:33.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:33.878725  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:33.878768  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:34.378621  401365 type.go:168] "Request Body" body=""
	I1210 06:28:34.378700  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:34.379046  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:34.877880  401365 type.go:168] "Request Body" body=""
	I1210 06:28:34.877952  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:34.878345  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:35.378044  401365 type.go:168] "Request Body" body=""
	I1210 06:28:35.378114  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:35.378389  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:35.878221  401365 type.go:168] "Request Body" body=""
	I1210 06:28:35.878303  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:35.878728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:35.878785  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:36.378584  401365 type.go:168] "Request Body" body=""
	I1210 06:28:36.378665  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:36.379008  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:36.878369  401365 type.go:168] "Request Body" body=""
	I1210 06:28:36.878444  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:36.878707  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:37.176405  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:37.232388  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:37.235885  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:37.235920  401365 retry.go:31] will retry after 10.78688793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:37.378207  401365 type.go:168] "Request Body" body=""
	I1210 06:28:37.378282  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:37.378581  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:37.878419  401365 type.go:168] "Request Body" body=""
	I1210 06:28:37.878495  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:37.878813  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:37.878863  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:38.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:28:38.378474  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:38.378754  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:38.878544  401365 type.go:168] "Request Body" body=""
	I1210 06:28:38.878622  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:38.878987  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:39.377713  401365 type.go:168] "Request Body" body=""
	I1210 06:28:39.377797  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:39.378129  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:39.878083  401365 type.go:168] "Request Body" body=""
	I1210 06:28:39.878150  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:39.878429  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:40.378442  401365 type.go:168] "Request Body" body=""
	I1210 06:28:40.378523  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:40.378856  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:40.378911  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
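The loop above polls every ~500 ms. Once the apiserver accepts connections at all, an event-driven alternative is to watch the single node rather than poll it; a hedged client-go sketch (minikube itself polls here, so this is an alternative, not its implementation):

    // watch_sketch.go — event-driven Ready check; only works once 8441 answers.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	w, err := cs.CoreV1().Nodes().Watch(context.Background(), metav1.ListOptions{
    		FieldSelector: "metadata.name=functional-253997",
    	})
    	if err != nil {
    		panic(err) // still "connection refused" while the apiserver is down
    	}
    	defer w.Stop()
    	for ev := range w.ResultChan() {
    		node, ok := ev.Object.(*corev1.Node)
    		if !ok {
    			continue
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				fmt.Println("node Ready")
    				return
    			}
    		}
    	}
    }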
	I1210 06:28:40.877583  401365 type.go:168] "Request Body" body=""
	I1210 06:28:40.877679  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:40.878034  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:41.378374  401365 type.go:168] "Request Body" body=""
	I1210 06:28:41.378447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:41.378715  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:41.878491  401365 type.go:168] "Request Body" body=""
	I1210 06:28:41.878566  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:41.878923  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:42.377671  401365 type.go:168] "Request Body" body=""
	I1210 06:28:42.377751  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:42.378141  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:42.877599  401365 type.go:168] "Request Body" body=""
	I1210 06:28:42.877683  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:42.877945  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:42.877984  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:43.287649  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:43.346928  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:43.346975  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:43.346995  401365 retry.go:31] will retry after 14.625741063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:43.378235  401365 type.go:168] "Request Body" body=""
	I1210 06:28:43.378315  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:43.378642  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:43.878431  401365 type.go:168] "Request Body" body=""
	I1210 06:28:43.878526  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:43.878848  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:44.378341  401365 type.go:168] "Request Body" body=""
	I1210 06:28:44.378412  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:44.378674  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:44.877586  401365 type.go:168] "Request Body" body=""
	I1210 06:28:44.877680  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:44.878028  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:44.878086  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:45.377798  401365 type.go:168] "Request Body" body=""
	I1210 06:28:45.377879  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:45.378245  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:45.878503  401365 type.go:168] "Request Body" body=""
	I1210 06:28:45.878572  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:45.878831  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:46.378595  401365 type.go:168] "Request Body" body=""
	I1210 06:28:46.378672  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:46.378982  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:46.877682  401365 type.go:168] "Request Body" body=""
	I1210 06:28:46.877759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:46.878095  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:46.878155  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:47.377841  401365 type.go:168] "Request Body" body=""
	I1210 06:28:47.377917  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:47.378263  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:47.877992  401365 type.go:168] "Request Body" body=""
	I1210 06:28:47.878078  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:47.878438  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:48.023828  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:48.081536  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:48.084895  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:48.084933  401365 retry.go:31] will retry after 18.097374996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:48.378332  401365 type.go:168] "Request Body" body=""
	I1210 06:28:48.378422  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:48.378753  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:48.878419  401365 type.go:168] "Request Body" body=""
	I1210 06:28:48.878497  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:48.878762  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:48.878816  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:49.378574  401365 type.go:168] "Request Body" body=""
	I1210 06:28:49.378648  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:49.378945  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:49.877700  401365 type.go:168] "Request Body" body=""
	I1210 06:28:49.877800  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:49.878143  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:50.377920  401365 type.go:168] "Request Body" body=""
	I1210 06:28:50.377988  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:50.378294  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:50.877693  401365 type.go:168] "Request Body" body=""
	I1210 06:28:50.877785  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:50.878095  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:51.377686  401365 type.go:168] "Request Body" body=""
	I1210 06:28:51.377791  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:51.378134  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:51.378207  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:51.877781  401365 type.go:168] "Request Body" body=""
	I1210 06:28:51.877851  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:51.878166  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:52.377911  401365 type.go:168] "Request Body" body=""
	I1210 06:28:52.377995  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:52.378322  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:52.878024  401365 type.go:168] "Request Body" body=""
	I1210 06:28:52.878097  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:52.878439  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:53.377622  401365 type.go:168] "Request Body" body=""
	I1210 06:28:53.377713  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:53.378024  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:53.877755  401365 type.go:168] "Request Body" body=""
	I1210 06:28:53.877852  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:53.878190  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:53.878248  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:54.377697  401365 type.go:168] "Request Body" body=""
	I1210 06:28:54.377777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:54.378108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:54.877974  401365 type.go:168] "Request Body" body=""
	I1210 06:28:54.878043  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:54.878312  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:55.378006  401365 type.go:168] "Request Body" body=""
	I1210 06:28:55.378086  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:55.378481  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:55.878103  401365 type.go:168] "Request Body" body=""
	I1210 06:28:55.878195  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:55.878572  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:55.878630  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:56.378220  401365 type.go:168] "Request Body" body=""
	I1210 06:28:56.378297  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:56.378560  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:56.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:28:56.878464  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:56.878801  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:57.378592  401365 type.go:168] "Request Body" body=""
	I1210 06:28:57.378672  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:57.379008  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:57.877621  401365 type.go:168] "Request Body" body=""
	I1210 06:28:57.877696  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:57.878001  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:57.973321  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:58.030522  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:58.034296  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:58.034334  401365 retry.go:31] will retry after 29.63385811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
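By this round the per-attempt delay has grown to ~30s; both the node polling and the addon applies are ultimately bounded by an overall deadline rather than a retry count. A sketch of capping such a wait with apimachinery's poll helper (the 6-minute budget is an assumption for illustration, not the test's actual timeout):

    // deadline_sketch.go — bounding a poll with an overall timeout.
    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	err := wait.PollUntilContextTimeout(context.Background(),
    		500*time.Millisecond, // the log's ~500 ms cadence
    		6*time.Minute,        // assumed overall budget
    		true,                 // check once before the first sleep
    		func(ctx context.Context) (bool, error) {
    			conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", time.Second)
    			if err != nil {
    				return false, nil // not ready yet; keep polling
    			}
    			conn.Close()
    			return true, nil
    		})
    	fmt.Println(err) // context deadline exceeded if the apiserver never came up
    }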
	I1210 06:28:58.377818  401365 type.go:168] "Request Body" body=""
	I1210 06:28:58.377897  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:58.378240  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:58.378316  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	[identical GET polls of /api/v1/nodes/functional-253997 repeated every ~500ms through 06:29:05.878; every response was empty, and the "Ready" check logged the same "connection refused" warning roughly every 2.5s]
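Each iteration above is one GET of the node object followed by a check of its "Ready" condition, retried while the API server stays unreachable. A sketch of that loop using client-go follows; the kubeconfig path and the ~500ms interval are taken from the log, while the reduced warn-and-retry error handling is an assumption.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-253997", metav1.GetOptions{})
			if err != nil {
				// Matches the log: warn and keep retrying while the server is down.
				fmt.Printf("error getting node (will retry): %v\n", err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}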
	I1210 06:29:06.182558  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:29:06.240148  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:06.243928  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:29:06.243964  401365 retry.go:31] will retry after 43.852698404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
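The ssh_runner lines show how addon manifests are applied: the cluster's own kubectl binary is invoked with KUBECONFIG pointing at the in-VM kubeconfig. A local sketch of that invocation with os/exec is below, using the paths from the log (so it only runs where those files exist). Note that kubectl's suggested --validate=false would only skip schema validation; the apply itself still needs a reachable API server, so it would not rescue this run.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command(
			"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml",
		)
		// Same environment the log shows: point kubectl at the cluster's kubeconfig.
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// With the API server down this fails exactly as in the log:
			// validation cannot download the OpenAPI schema.
			fmt.Printf("apply failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("applied:\n%s", out)
	}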
	I1210 06:29:06.378184  401365 type.go:168] "Request Body" body=""
	I1210 06:29:06.378259  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:06.378534  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[the same poll repeated every ~500ms through 06:29:27.378, with "Ready"-status "connection refused" warnings at 06:29:06.878, 06:29:09.378, 06:29:11.378, 06:29:13.878, 06:29:16.378, 06:29:18.378, 06:29:20.878, 06:29:23.378 and 06:29:25.378]
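Every failure in this stretch is one symptom: nothing is accepting connections on 192.168.49.2:8441. A plain TCP dial, independent of any Kubernetes tooling, confirms that directly; the address is the one from the log.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// Expected while the apiserver is down: "connect: connection refused".
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}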
	I1210 06:29:27.669323  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:29:27.726986  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:27.731088  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:27.731190  401365 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
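At this point the addon's retry budget is spent, so the failure is no longer retried; it is downgraded to the user-facing out.go:285 warning and startup continues. A minimal sketch of that give-up-and-warn step, where applyWithRetry is a hypothetical stand-in for the retry loop sketched earlier:

	package main

	import (
		"errors"
		"fmt"
	)

	func enableAddon(name string, applyWithRetry func() error) {
		if err := applyWithRetry(); err != nil {
			// Surface the exhausted retry as a warning rather than aborting.
			fmt.Printf("! Enabling %q returned an error: %v\n", name, err)
			return
		}
		fmt.Printf("addon %q enabled\n", name)
	}

	func main() {
		enableAddon("storage-provisioner", func() error {
			return errors.New("connect: connection refused")
		})
	}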
	I1210 06:29:27.878451  401365 type.go:168] "Request Body" body=""
	I1210 06:29:27.878523  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:27.878853  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:27.878910  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	[polling continued unchanged every ~500ms through 06:29:49.878, with the same empty responses and a "connection refused" warning roughly every 2.5s]
	I1210 06:29:50.096947  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:29:50.160267  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:50.160316  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:50.160396  401365 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 06:29:50.163553  401365 out.go:179] * Enabled addons: 
	I1210 06:29:50.167218  401365 addons.go:530] duration metric: took 1m39.789022145s for enable addons: enabled=[]
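The failed callback above is the default-storageclass addon: kubectl apply validates manifests against the apiserver's /openapi/v2 endpoint, so while port 8441 refuses connections the apply exits with status 1, minikube logs "apply failed, will retry", and it ultimately gives up with enabled=[]. As the captured error itself notes, validation can be skipped with --validate=false, at the cost of applying an unvalidated manifest. A rough Go sketch of such a retry wrapper follows; the retry count, backoff, and plain kubectl-on-PATH invocation are assumptions for illustration (the log shows minikube actually invoking its bundled kubectl with sudo and an explicit KUBECONFIG).

	// applyretry.go: a sketch of the "apply failed, will retry" behavior
	// shown above. The retry policy and command form are assumed, not
	// taken from minikube's pkg/addons implementation.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		const manifest = "/etc/kubernetes/addons/storageclass.yaml"
		for attempt := 1; attempt <= 3; attempt++ {
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				fmt.Println("applied:", manifest)
				return
			}
			// With the apiserver down this surfaces the same validation
			// error captured above (failed to download openapi).
			fmt.Printf("apply failed (attempt %d), will retry: %v\n%s", attempt, err, out)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("giving up; addon not enabled (enabled=[])")
	}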
	I1210 06:29:50.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:29:50.377718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:50.378070  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:50.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:29:50.877785  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:50.878103  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:51.378394  401365 type.go:168] "Request Body" body=""
	I1210 06:29:51.378472  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:51.378746  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:51.378813  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:51.878588  401365 type.go:168] "Request Body" body=""
	I1210 06:29:51.878669  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:51.878981  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:52.377564  401365 type.go:168] "Request Body" body=""
	I1210 06:29:52.377654  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:52.378002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:52.878385  401365 type.go:168] "Request Body" body=""
	I1210 06:29:52.878456  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:52.878735  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:53.378623  401365 type.go:168] "Request Body" body=""
	I1210 06:29:53.378696  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:53.379007  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:53.379062  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:53.877727  401365 type.go:168] "Request Body" body=""
	I1210 06:29:53.877818  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:53.878163  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:54.377608  401365 type.go:168] "Request Body" body=""
	I1210 06:29:54.377697  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:54.378015  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:54.877713  401365 type.go:168] "Request Body" body=""
	I1210 06:29:54.877810  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:54.878175  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:55.377895  401365 type.go:168] "Request Body" body=""
	I1210 06:29:55.377968  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:55.378309  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:55.877991  401365 type.go:168] "Request Body" body=""
	I1210 06:29:55.878064  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:55.878416  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:55.878476  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:56.378216  401365 type.go:168] "Request Body" body=""
	I1210 06:29:56.378295  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:56.378666  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:56.878479  401365 type.go:168] "Request Body" body=""
	I1210 06:29:56.878557  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:56.878903  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:57.378397  401365 type.go:168] "Request Body" body=""
	I1210 06:29:57.378477  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:57.378742  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:57.878382  401365 type.go:168] "Request Body" body=""
	I1210 06:29:57.878456  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:57.878755  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:57.878801  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:58.378559  401365 type.go:168] "Request Body" body=""
	I1210 06:29:58.378645  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:58.378936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:58.877600  401365 type.go:168] "Request Body" body=""
	I1210 06:29:58.877684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:58.877957  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:59.377641  401365 type.go:168] "Request Body" body=""
	I1210 06:29:59.377718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:59.378043  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:59.878036  401365 type.go:168] "Request Body" body=""
	I1210 06:29:59.878111  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:59.878453  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:00.403040  401365 type.go:168] "Request Body" body=""
	I1210 06:30:00.403489  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:00.403971  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:00.404065  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:00.877628  401365 type.go:168] "Request Body" body=""
	I1210 06:30:00.877715  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:00.878111  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:01.378405  401365 type.go:168] "Request Body" body=""
	I1210 06:30:01.378490  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:01.378858  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:01.878587  401365 type.go:168] "Request Body" body=""
	I1210 06:30:01.878670  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:01.879048  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:02.377809  401365 type.go:168] "Request Body" body=""
	I1210 06:30:02.377884  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:02.378218  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:02.877618  401365 type.go:168] "Request Body" body=""
	I1210 06:30:02.877691  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:02.877969  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:02.878012  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:03.377736  401365 type.go:168] "Request Body" body=""
	I1210 06:30:03.377819  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:03.378180  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:03.877919  401365 type.go:168] "Request Body" body=""
	I1210 06:30:03.878016  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:03.878393  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:04.378222  401365 type.go:168] "Request Body" body=""
	I1210 06:30:04.378313  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:04.378635  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:04.878431  401365 type.go:168] "Request Body" body=""
	I1210 06:30:04.878505  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:04.879753  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1210 06:30:04.879813  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:05.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:30:05.378482  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:05.378830  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:05.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:30:05.878480  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:05.878741  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:06.378628  401365 type.go:168] "Request Body" body=""
	I1210 06:30:06.378703  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:06.379023  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:06.877690  401365 type.go:168] "Request Body" body=""
	I1210 06:30:06.877764  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:06.878119  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:07.377808  401365 type.go:168] "Request Body" body=""
	I1210 06:30:07.377895  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:07.378245  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:07.378302  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:07.877669  401365 type.go:168] "Request Body" body=""
	I1210 06:30:07.877760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:07.878098  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:08.377849  401365 type.go:168] "Request Body" body=""
	I1210 06:30:08.377929  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:08.378272  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:08.877616  401365 type.go:168] "Request Body" body=""
	I1210 06:30:08.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:08.878027  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:09.378016  401365 type.go:168] "Request Body" body=""
	I1210 06:30:09.378098  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:09.378433  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:09.378480  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:09.878345  401365 type.go:168] "Request Body" body=""
	I1210 06:30:09.878427  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:09.878801  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:10.378580  401365 type.go:168] "Request Body" body=""
	I1210 06:30:10.378704  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:10.379089  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:10.877668  401365 type.go:168] "Request Body" body=""
	I1210 06:30:10.877745  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:10.878101  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:11.377836  401365 type.go:168] "Request Body" body=""
	I1210 06:30:11.377918  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:11.378278  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:11.877990  401365 type.go:168] "Request Body" body=""
	I1210 06:30:11.878058  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:11.878328  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:11.878370  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:12.377682  401365 type.go:168] "Request Body" body=""
	I1210 06:30:12.377762  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:12.378131  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:12.877864  401365 type.go:168] "Request Body" body=""
	I1210 06:30:12.877940  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:12.878290  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:13.377986  401365 type.go:168] "Request Body" body=""
	I1210 06:30:13.378060  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:13.378390  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:13.878180  401365 type.go:168] "Request Body" body=""
	I1210 06:30:13.878256  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:13.878586  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:13.878648  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:14.378401  401365 type.go:168] "Request Body" body=""
	I1210 06:30:14.378479  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:14.378827  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:14.878395  401365 type.go:168] "Request Body" body=""
	I1210 06:30:14.878479  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:14.878758  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:15.378543  401365 type.go:168] "Request Body" body=""
	I1210 06:30:15.378623  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:15.378945  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:15.877679  401365 type.go:168] "Request Body" body=""
	I1210 06:30:15.877759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:15.878101  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:16.377593  401365 type.go:168] "Request Body" body=""
	I1210 06:30:16.377664  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:16.377962  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:16.378009  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:16.877684  401365 type.go:168] "Request Body" body=""
	I1210 06:30:16.877760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:16.878132  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:17.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:30:17.377724  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:17.378099  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:17.877591  401365 type.go:168] "Request Body" body=""
	I1210 06:30:17.877703  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:17.878030  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:18.377710  401365 type.go:168] "Request Body" body=""
	I1210 06:30:18.377789  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:18.378142  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:18.378208  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:18.877756  401365 type.go:168] "Request Body" body=""
	I1210 06:30:18.877843  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:18.878196  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:19.377801  401365 type.go:168] "Request Body" body=""
	I1210 06:30:19.377880  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:19.378158  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:19.878182  401365 type.go:168] "Request Body" body=""
	I1210 06:30:19.878260  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:19.878613  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:20.378479  401365 type.go:168] "Request Body" body=""
	I1210 06:30:20.378562  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:20.378922  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:20.378995  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:20.878437  401365 type.go:168] "Request Body" body=""
	I1210 06:30:20.878515  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:20.878801  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:21.378593  401365 type.go:168] "Request Body" body=""
	I1210 06:30:21.378678  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:21.379014  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:21.877727  401365 type.go:168] "Request Body" body=""
	I1210 06:30:21.877805  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:21.878139  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:22.377644  401365 type.go:168] "Request Body" body=""
	I1210 06:30:22.377720  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:22.378036  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:22.877631  401365 type.go:168] "Request Body" body=""
	I1210 06:30:22.877708  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:22.878077  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:22.878133  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:23.377664  401365 type.go:168] "Request Body" body=""
	I1210 06:30:23.377759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:23.378132  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:23.877609  401365 type.go:168] "Request Body" body=""
	I1210 06:30:23.877684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:23.878013  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:24.377728  401365 type.go:168] "Request Body" body=""
	I1210 06:30:24.377803  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:24.378189  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:24.878072  401365 type.go:168] "Request Body" body=""
	I1210 06:30:24.878208  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:24.878537  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:24.878592  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:25.378359  401365 type.go:168] "Request Body" body=""
	I1210 06:30:25.378444  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:25.378710  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:25.878517  401365 type.go:168] "Request Body" body=""
	I1210 06:30:25.878613  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:25.878961  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:26.377659  401365 type.go:168] "Request Body" body=""
	I1210 06:30:26.377737  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:26.378086  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:26.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:30:26.878468  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:26.878744  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:26.878791  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:27.378535  401365 type.go:168] "Request Body" body=""
	I1210 06:30:27.378611  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:27.378947  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:27.877649  401365 type.go:168] "Request Body" body=""
	I1210 06:30:27.877732  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:27.878085  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:28.377643  401365 type.go:168] "Request Body" body=""
	I1210 06:30:28.377718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:28.378171  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:28.877894  401365 type.go:168] "Request Body" body=""
	I1210 06:30:28.877977  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:28.878324  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:29.378072  401365 type.go:168] "Request Body" body=""
	I1210 06:30:29.378156  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:29.378530  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:29.378586  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:29.878257  401365 type.go:168] "Request Body" body=""
	I1210 06:30:29.878331  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:29.878620  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:30.377624  401365 type.go:168] "Request Body" body=""
	I1210 06:30:30.377719  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:30.378103  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:30.877807  401365 type.go:168] "Request Body" body=""
	I1210 06:30:30.877939  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:30.878264  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:31.377983  401365 type.go:168] "Request Body" body=""
	I1210 06:30:31.378059  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:31.378337  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:31.877670  401365 type.go:168] "Request Body" body=""
	I1210 06:30:31.877748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:31.878104  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:31.878164  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:32.377881  401365 type.go:168] "Request Body" body=""
	I1210 06:30:32.377966  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:32.378312  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:32.877995  401365 type.go:168] "Request Body" body=""
	I1210 06:30:32.878071  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:32.878437  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:33.378235  401365 type.go:168] "Request Body" body=""
	I1210 06:30:33.378311  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:33.378664  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:33.878393  401365 type.go:168] "Request Body" body=""
	I1210 06:30:33.878477  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:33.878789  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:33.878839  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:34.378385  401365 type.go:168] "Request Body" body=""
	I1210 06:30:34.378460  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:34.378856  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:34.877875  401365 type.go:168] "Request Body" body=""
	I1210 06:30:34.877953  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:34.878307  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:35.377692  401365 type.go:168] "Request Body" body=""
	I1210 06:30:35.377807  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:35.378225  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:35.877600  401365 type.go:168] "Request Body" body=""
	I1210 06:30:35.877677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:35.878020  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:36.377715  401365 type.go:168] "Request Body" body=""
	I1210 06:30:36.377791  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:36.378143  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:36.378205  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:36.877696  401365 type.go:168] "Request Body" body=""
	I1210 06:30:36.877774  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:36.878137  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:37.378398  401365 type.go:168] "Request Body" body=""
	I1210 06:30:37.378477  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:37.378746  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:37.878553  401365 type.go:168] "Request Body" body=""
	I1210 06:30:37.878672  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:37.879091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:38.377679  401365 type.go:168] "Request Body" body=""
	I1210 06:30:38.377760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:38.378109  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:38.877617  401365 type.go:168] "Request Body" body=""
	I1210 06:30:38.877690  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:38.877965  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:38.878020  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:39.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:30:39.377729  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:39.378078  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:39.877852  401365 type.go:168] "Request Body" body=""
	I1210 06:30:39.878005  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:39.878296  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:40.378297  401365 type.go:168] "Request Body" body=""
	I1210 06:30:40.378419  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:40.378746  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:40.878609  401365 type.go:168] "Request Body" body=""
	I1210 06:30:40.878695  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:40.879047  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:40.879109  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:41.377677  401365 type.go:168] "Request Body" body=""
	I1210 06:30:41.377761  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:41.378136  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:41.877816  401365 type.go:168] "Request Body" body=""
	I1210 06:30:41.877897  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:41.878247  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:42.377681  401365 type.go:168] "Request Body" body=""
	I1210 06:30:42.377759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:42.378160  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:42.877905  401365 type.go:168] "Request Body" body=""
	I1210 06:30:42.877988  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:42.878334  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:43.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:30:43.377686  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:43.378002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:43.378054  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET polls of /api/v1/nodes/functional-253997 repeat at ~500ms intervals from 06:30:43 through 06:31:44, every response empty and every attempt failing with "connection refused"; a node_ready.go:55 retry warning is logged roughly every fifth poll ...]
	I1210 06:31:43.878454  401365 type.go:168] "Request Body" body=""
	I1210 06:31:43.878546  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:43.878900  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:43.878962  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:44.378527  401365 type.go:168] "Request Body" body=""
	I1210 06:31:44.378608  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:44.378911  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:44.877852  401365 type.go:168] "Request Body" body=""
	I1210 06:31:44.877944  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:44.878230  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:45.377683  401365 type.go:168] "Request Body" body=""
	I1210 06:31:45.377757  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:45.378232  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:45.877964  401365 type.go:168] "Request Body" body=""
	I1210 06:31:45.878060  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:45.878412  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.378182  401365 type.go:168] "Request Body" body=""
	I1210 06:31:46.378267  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.378573  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:46.378621  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:46.878427  401365 type.go:168] "Request Body" body=""
	I1210 06:31:46.878510  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.878849  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.378554  401365 type.go:168] "Request Body" body=""
	I1210 06:31:47.378637  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.379010  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.878381  401365 type.go:168] "Request Body" body=""
	I1210 06:31:47.878453  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.878751  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:48.378580  401365 type.go:168] "Request Body" body=""
	I1210 06:31:48.378660  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.378984  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:48.379037  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:48.877565  401365 type.go:168] "Request Body" body=""
	I1210 06:31:48.877642  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.877972  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.378371  401365 type.go:168] "Request Body" body=""
	I1210 06:31:49.378448  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.378712  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.878386  401365 type.go:168] "Request Body" body=""
	I1210 06:31:49.878461  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.878790  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.377587  401365 type.go:168] "Request Body" body=""
	I1210 06:31:50.377673  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.378035  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.878395  401365 type.go:168] "Request Body" body=""
	I1210 06:31:50.878469  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.878754  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:50.878808  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:51.378548  401365 type.go:168] "Request Body" body=""
	I1210 06:31:51.378627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.378976  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.877595  401365 type.go:168] "Request Body" body=""
	I1210 06:31:51.877677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.878064  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.378358  401365 type.go:168] "Request Body" body=""
	I1210 06:31:52.378433  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.378695  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.878474  401365 type.go:168] "Request Body" body=""
	I1210 06:31:52.878551  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.878895  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:52.878957  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:53.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:31:53.377721  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.378047  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.877607  401365 type.go:168] "Request Body" body=""
	I1210 06:31:53.877682  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.878066  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.377680  401365 type.go:168] "Request Body" body=""
	I1210 06:31:54.377759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.378069  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.877984  401365 type.go:168] "Request Body" body=""
	I1210 06:31:54.878068  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.878451  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:55.378227  401365 type.go:168] "Request Body" body=""
	I1210 06:31:55.378305  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.378567  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:55.378612  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:55.878449  401365 type.go:168] "Request Body" body=""
	I1210 06:31:55.878524  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.878878  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.377607  401365 type.go:168] "Request Body" body=""
	I1210 06:31:56.377684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.378031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:31:56.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.878731  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:57.378523  401365 type.go:168] "Request Body" body=""
	I1210 06:31:57.378605  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.378963  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:57.379024  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:57.878422  401365 type.go:168] "Request Body" body=""
	I1210 06:31:57.878496  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.878837  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:58.378369  401365 type.go:168] "Request Body" body=""
	I1210 06:31:58.378450  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.378724  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:58.878516  401365 type.go:168] "Request Body" body=""
	I1210 06:31:58.878590  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.878936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:31:59.377756  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.378079  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.878003  401365 type.go:168] "Request Body" body=""
	I1210 06:31:59.878079  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.878346  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:59.878388  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:00.378620  401365 type.go:168] "Request Body" body=""
	I1210 06:32:00.378720  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.379187  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:00.877753  401365 type.go:168] "Request Body" body=""
	I1210 06:32:00.877830  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.878187  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.377623  401365 type.go:168] "Request Body" body=""
	I1210 06:32:01.377694  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.377960  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.877717  401365 type.go:168] "Request Body" body=""
	I1210 06:32:01.877791  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.878137  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:02.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:32:02.377750  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.378099  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:02.378152  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:02.878416  401365 type.go:168] "Request Body" body=""
	I1210 06:32:02.878493  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.878764  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:03.378615  401365 type.go:168] "Request Body" body=""
	I1210 06:32:03.378694  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:03.379016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:03.877719  401365 type.go:168] "Request Body" body=""
	I1210 06:32:03.877801  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:03.878168  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:04.377604  401365 type.go:168] "Request Body" body=""
	I1210 06:32:04.377682  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:04.378022  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:04.878029  401365 type.go:168] "Request Body" body=""
	I1210 06:32:04.878113  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:04.878426  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:04.878477  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:05.378217  401365 type.go:168] "Request Body" body=""
	I1210 06:32:05.378293  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:05.378623  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:05.878242  401365 type.go:168] "Request Body" body=""
	I1210 06:32:05.878313  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:05.878586  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:06.378446  401365 type.go:168] "Request Body" body=""
	I1210 06:32:06.378528  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:06.378861  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:06.877578  401365 type.go:168] "Request Body" body=""
	I1210 06:32:06.877651  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:06.877991  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:07.378348  401365 type.go:168] "Request Body" body=""
	I1210 06:32:07.378430  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:07.378696  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:07.378747  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:07.878485  401365 type.go:168] "Request Body" body=""
	I1210 06:32:07.878566  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:07.878891  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:08.377683  401365 type.go:168] "Request Body" body=""
	I1210 06:32:08.377758  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:08.378068  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:08.877617  401365 type.go:168] "Request Body" body=""
	I1210 06:32:08.877686  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:08.877996  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:09.377667  401365 type.go:168] "Request Body" body=""
	I1210 06:32:09.377749  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:09.378070  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:09.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:32:09.878526  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:09.878847  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:09.878895  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:10.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:32:10.377695  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:10.377992  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:10.877670  401365 type.go:168] "Request Body" body=""
	I1210 06:32:10.877747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:10.878107  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:11.377752  401365 type.go:168] "Request Body" body=""
	I1210 06:32:11.377832  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:11.378194  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:11.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:32:11.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:11.878721  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:12.378536  401365 type.go:168] "Request Body" body=""
	I1210 06:32:12.378609  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:12.379037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:12.379094  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:12.877634  401365 type.go:168] "Request Body" body=""
	I1210 06:32:12.877718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:12.878024  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:13.377615  401365 type.go:168] "Request Body" body=""
	I1210 06:32:13.377684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:13.377949  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:13.877636  401365 type.go:168] "Request Body" body=""
	I1210 06:32:13.877713  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:13.878091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:14.377642  401365 type.go:168] "Request Body" body=""
	I1210 06:32:14.377717  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:14.378074  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:14.877991  401365 type.go:168] "Request Body" body=""
	I1210 06:32:14.878073  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:14.878405  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:14.878468  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:15.378244  401365 type.go:168] "Request Body" body=""
	I1210 06:32:15.378316  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:15.378669  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:15.878506  401365 type.go:168] "Request Body" body=""
	I1210 06:32:15.878598  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:15.878952  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:16.378402  401365 type.go:168] "Request Body" body=""
	I1210 06:32:16.378473  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:16.378735  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:16.878581  401365 type.go:168] "Request Body" body=""
	I1210 06:32:16.878668  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:16.879029  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:16.879085  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:17.377664  401365 type.go:168] "Request Body" body=""
	I1210 06:32:17.377738  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:17.378065  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:17.877603  401365 type.go:168] "Request Body" body=""
	I1210 06:32:17.877677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:17.877943  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:18.377679  401365 type.go:168] "Request Body" body=""
	I1210 06:32:18.377754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:18.378106  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:18.877827  401365 type.go:168] "Request Body" body=""
	I1210 06:32:18.877917  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:18.878299  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:19.377981  401365 type.go:168] "Request Body" body=""
	I1210 06:32:19.378062  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:19.378390  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:19.378451  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:19.878242  401365 type.go:168] "Request Body" body=""
	I1210 06:32:19.878318  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:19.878664  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:20.377555  401365 type.go:168] "Request Body" body=""
	I1210 06:32:20.377633  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:20.377966  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:20.877592  401365 type.go:168] "Request Body" body=""
	I1210 06:32:20.877663  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:20.878022  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:21.377596  401365 type.go:168] "Request Body" body=""
	I1210 06:32:21.377677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:21.377971  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:21.877671  401365 type.go:168] "Request Body" body=""
	I1210 06:32:21.877747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:21.878078  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:21.878135  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:22.377603  401365 type.go:168] "Request Body" body=""
	I1210 06:32:22.377681  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:22.377998  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:22.877713  401365 type.go:168] "Request Body" body=""
	I1210 06:32:22.877789  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:22.878146  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:23.378586  401365 type.go:168] "Request Body" body=""
	I1210 06:32:23.378663  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:23.379023  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:23.877627  401365 type.go:168] "Request Body" body=""
	I1210 06:32:23.877698  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:23.878027  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:24.377671  401365 type.go:168] "Request Body" body=""
	I1210 06:32:24.377755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:24.378140  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:24.378210  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:24.878158  401365 type.go:168] "Request Body" body=""
	I1210 06:32:24.878240  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:24.878611  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:25.378254  401365 type.go:168] "Request Body" body=""
	I1210 06:32:25.378329  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:25.378601  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:25.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:32:25.878460  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:25.878767  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:26.378460  401365 type.go:168] "Request Body" body=""
	I1210 06:32:26.378534  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:26.378923  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:26.378977  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:26.878379  401365 type.go:168] "Request Body" body=""
	I1210 06:32:26.878453  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:26.878804  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:27.378593  401365 type.go:168] "Request Body" body=""
	I1210 06:32:27.378674  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:27.379034  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:27.877668  401365 type.go:168] "Request Body" body=""
	I1210 06:32:27.877743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:27.878091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:28.378401  401365 type.go:168] "Request Body" body=""
	I1210 06:32:28.378470  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:28.378735  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:28.878509  401365 type.go:168] "Request Body" body=""
	I1210 06:32:28.878592  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:28.878904  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:28.878959  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:29.377676  401365 type.go:168] "Request Body" body=""
	I1210 06:32:29.377758  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:29.378099  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:29.877932  401365 type.go:168] "Request Body" body=""
	I1210 06:32:29.878011  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:29.878331  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:30.378437  401365 type.go:168] "Request Body" body=""
	I1210 06:32:30.378520  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:30.378881  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:30.877601  401365 type.go:168] "Request Body" body=""
	I1210 06:32:30.877679  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:30.877997  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:31.378413  401365 type.go:168] "Request Body" body=""
	I1210 06:32:31.378485  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:31.378800  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:31.378859  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:31.877545  401365 type.go:168] "Request Body" body=""
	I1210 06:32:31.877620  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:31.877962  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:32.377685  401365 type.go:168] "Request Body" body=""
	I1210 06:32:32.377765  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:32.378115  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:32.878382  401365 type.go:168] "Request Body" body=""
	I1210 06:32:32.878458  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:32.878718  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:33.378533  401365 type.go:168] "Request Body" body=""
	I1210 06:32:33.378613  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:33.378973  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:33.379031  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:33.877670  401365 type.go:168] "Request Body" body=""
	I1210 06:32:33.877743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:33.878099  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:34.377573  401365 type.go:168] "Request Body" body=""
	I1210 06:32:34.377644  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:34.377911  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:34.877902  401365 type.go:168] "Request Body" body=""
	I1210 06:32:34.877978  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:34.878339  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:35.378057  401365 type.go:168] "Request Body" body=""
	I1210 06:32:35.378143  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:35.378506  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:35.878224  401365 type.go:168] "Request Body" body=""
	I1210 06:32:35.878295  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:35.878562  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:35.878604  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:36.378404  401365 type.go:168] "Request Body" body=""
	I1210 06:32:36.378487  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:36.378840  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:36.877571  401365 type.go:168] "Request Body" body=""
	I1210 06:32:36.877653  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:36.877994  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:37.378346  401365 type.go:168] "Request Body" body=""
	I1210 06:32:37.378421  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:37.378684  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:37.878461  401365 type.go:168] "Request Body" body=""
	I1210 06:32:37.878543  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:37.878890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:37.878952  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:38.378573  401365 type.go:168] "Request Body" body=""
	I1210 06:32:38.378654  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.378951  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:38.878358  401365 type.go:168] "Request Body" body=""
	I1210 06:32:38.878428  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.878691  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.378473  401365 type.go:168] "Request Body" body=""
	I1210 06:32:39.378552  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.378939  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.877654  401365 type.go:168] "Request Body" body=""
	I1210 06:32:39.877738  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.878074  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:40.377853  401365 type.go:168] "Request Body" body=""
	I1210 06:32:40.377926  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.378227  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:40.378275  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:40.877665  401365 type.go:168] "Request Body" body=""
	I1210 06:32:40.877741  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.878110  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.377673  401365 type.go:168] "Request Body" body=""
	I1210 06:32:41.377748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.378080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.878456  401365 type.go:168] "Request Body" body=""
	I1210 06:32:41.878528  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.878849  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:42.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:32:42.377701  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.378097  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:42.877683  401365 type.go:168] "Request Body" body=""
	I1210 06:32:42.877759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:42.878128  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:42.878186  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:43.378375  401365 type.go:168] "Request Body" body=""
	I1210 06:32:43.378451  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.378720  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:43.878495  401365 type.go:168] "Request Body" body=""
	I1210 06:32:43.878566  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:43.878911  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.378610  401365 type.go:168] "Request Body" body=""
	I1210 06:32:44.378700  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.379090  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:44.877962  401365 type.go:168] "Request Body" body=""
	I1210 06:32:44.878030  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:44.878300  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:44.878343  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:45.377682  401365 type.go:168] "Request Body" body=""
	I1210 06:32:45.377763  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.378114  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:45.877818  401365 type.go:168] "Request Body" body=""
	I1210 06:32:45.877892  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:45.878234  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:46.377583  401365 type.go:168] "Request Body" body=""
	I1210 06:32:46.377660  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:46.377917  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:46.877665  401365 type.go:168] "Request Body" body=""
	I1210 06:32:46.877751  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:46.878148  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:47.377793  401365 type.go:168] "Request Body" body=""
	I1210 06:32:47.377870  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:47.378225  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:47.378277  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:47.877609  401365 type.go:168] "Request Body" body=""
	I1210 06:32:47.877689  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:47.877999  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:48.377617  401365 type.go:168] "Request Body" body=""
	I1210 06:32:48.377714  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:48.378121  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:48.877709  401365 type.go:168] "Request Body" body=""
	I1210 06:32:48.877795  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:48.878141  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:49.377627  401365 type.go:168] "Request Body" body=""
	I1210 06:32:49.377713  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:49.378005  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:49.878006  401365 type.go:168] "Request Body" body=""
	I1210 06:32:49.878085  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:49.878433  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:49.878488  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:50.378322  401365 type.go:168] "Request Body" body=""
	I1210 06:32:50.378398  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:50.378718  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:50.878347  401365 type.go:168] "Request Body" body=""
	I1210 06:32:50.878420  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:50.878687  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:51.378558  401365 type.go:168] "Request Body" body=""
	I1210 06:32:51.378630  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:51.378973  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:51.877668  401365 type.go:168] "Request Body" body=""
	I1210 06:32:51.877742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:51.878061  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:52.377607  401365 type.go:168] "Request Body" body=""
	I1210 06:32:52.377682  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:52.377965  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:52.378014  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:52.877660  401365 type.go:168] "Request Body" body=""
	I1210 06:32:52.877736  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:52.878070  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:53.377675  401365 type.go:168] "Request Body" body=""
	I1210 06:32:53.377760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:53.378128  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:53.878388  401365 type.go:168] "Request Body" body=""
	I1210 06:32:53.878462  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:53.878725  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:54.378466  401365 type.go:168] "Request Body" body=""
	I1210 06:32:54.378536  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:54.378857  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:54.378913  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:54.877680  401365 type.go:168] "Request Body" body=""
	I1210 06:32:54.877756  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:54.878119  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:55.378458  401365 type.go:168] "Request Body" body=""
	I1210 06:32:55.378526  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:55.378782  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:55.878544  401365 type.go:168] "Request Body" body=""
	I1210 06:32:55.878626  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:55.878951  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:56.377665  401365 type.go:168] "Request Body" body=""
	I1210 06:32:56.377741  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:56.378096  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:56.878361  401365 type.go:168] "Request Body" body=""
	I1210 06:32:56.878436  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:56.878736  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:56.878785  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:57.377545  401365 type.go:168] "Request Body" body=""
	I1210 06:32:57.377621  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:57.377956  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:57.877652  401365 type.go:168] "Request Body" body=""
	I1210 06:32:57.877743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:57.878070  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:58.377628  401365 type.go:168] "Request Body" body=""
	I1210 06:32:58.377706  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:58.378022  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:58.877657  401365 type.go:168] "Request Body" body=""
	I1210 06:32:58.877735  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:58.878058  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:59.377658  401365 type.go:168] "Request Body" body=""
	I1210 06:32:59.377748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:59.378092  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:59.378152  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:59.877990  401365 type.go:168] "Request Body" body=""
	I1210 06:32:59.878106  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:59.878540  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:00.378642  401365 type.go:168] "Request Body" body=""
	I1210 06:33:00.378734  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:00.379157  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:00.877676  401365 type.go:168] "Request Body" body=""
	I1210 06:33:00.877756  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:00.878108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:01.377615  401365 type.go:168] "Request Body" body=""
	I1210 06:33:01.377687  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:01.377982  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:01.877579  401365 type.go:168] "Request Body" body=""
	I1210 06:33:01.877659  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:01.877980  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:01.878035  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:02.377679  401365 type.go:168] "Request Body" body=""
	I1210 06:33:02.377769  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:02.378112  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:02.877622  401365 type.go:168] "Request Body" body=""
	I1210 06:33:02.877696  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:02.878019  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:03.378424  401365 type.go:168] "Request Body" body=""
	I1210 06:33:03.378503  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.378851  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:03.877594  401365 type.go:168] "Request Body" body=""
	I1210 06:33:03.877673  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:03.878031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:03.878095  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:04.377623  401365 type.go:168] "Request Body" body=""
	I1210 06:33:04.377695  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.378016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:04.878008  401365 type.go:168] "Request Body" body=""
	I1210 06:33:04.878082  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:04.878402  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.378189  401365 type.go:168] "Request Body" body=""
	I1210 06:33:05.378264  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.378599  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:05.878376  401365 type.go:168] "Request Body" body=""
	I1210 06:33:05.878455  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:05.878734  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:05.878779  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:06.378572  401365 type.go:168] "Request Body" body=""
	I1210 06:33:06.378660  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.379002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:06.877676  401365 type.go:168] "Request Body" body=""
	I1210 06:33:06.877754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:06.878110  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.378442  401365 type.go:168] "Request Body" body=""
	I1210 06:33:07.378521  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.378800  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:07.877549  401365 type.go:168] "Request Body" body=""
	I1210 06:33:07.877629  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:07.878000  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:08.377709  401365 type.go:168] "Request Body" body=""
	I1210 06:33:08.377785  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.378149  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:08.378206  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:08.877866  401365 type.go:168] "Request Body" body=""
	I1210 06:33:08.877938  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:08.878266  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.377997  401365 type.go:168] "Request Body" body=""
	I1210 06:33:09.378074  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.378430  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:09.878278  401365 type.go:168] "Request Body" body=""
	I1210 06:33:09.878362  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:09.878709  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:10.378535  401365 type.go:168] "Request Body" body=""
	I1210 06:33:10.378614  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.378892  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:10.378949  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:10.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:10.877696  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:10.878045  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:11.377636  401365 type.go:168] "Request Body" body=""
	I1210 06:33:11.377715  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.378069  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:11.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:33:11.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:11.878741  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:12.378537  401365 type.go:168] "Request Body" body=""
	I1210 06:33:12.378621  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.378959  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:12.379018  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:12.877685  401365 type.go:168] "Request Body" body=""
	I1210 06:33:12.877767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:12.878108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:13.377595  401365 type.go:168] "Request Body" body=""
	I1210 06:33:13.377667  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.377991  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:13.877696  401365 type.go:168] "Request Body" body=""
	I1210 06:33:13.877788  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:13.878233  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.377670  401365 type.go:168] "Request Body" body=""
	I1210 06:33:14.377745  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.378062  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:14.878087  401365 type.go:168] "Request Body" body=""
	I1210 06:33:14.878167  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:14.878437  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:14.878481  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:15.378338  401365 type.go:168] "Request Body" body=""
	I1210 06:33:15.378427  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.378799  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:15.877556  401365 type.go:168] "Request Body" body=""
	I1210 06:33:15.877630  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:15.878034  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:16.378366  401365 type.go:168] "Request Body" body=""
	I1210 06:33:16.378435  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.378773  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:16.878569  401365 type.go:168] "Request Body" body=""
	I1210 06:33:16.878643  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:16.879012  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:16.879074  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:17.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:33:17.377747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.378122  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:17.877820  401365 type.go:168] "Request Body" body=""
	I1210 06:33:17.877897  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:17.878175  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:18.377654  401365 type.go:168] "Request Body" body=""
	I1210 06:33:18.377727  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:18.378073  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:18.877668  401365 type.go:168] "Request Body" body=""
	I1210 06:33:18.877754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:18.878137  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:19.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:33:19.377682  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:19.377977  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:19.378029  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:19.877848  401365 type.go:168] "Request Body" body=""
	I1210 06:33:19.877930  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:19.878248  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:20.378064  401365 type.go:168] "Request Body" body=""
	I1210 06:33:20.378150  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:20.378561  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:20.878476  401365 type.go:168] "Request Body" body=""
	I1210 06:33:20.878552  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:20.878835  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:21.377583  401365 type.go:168] "Request Body" body=""
	I1210 06:33:21.377658  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:21.378029  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:21.378094  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:21.877680  401365 type.go:168] "Request Body" body=""
	I1210 06:33:21.877755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:21.878122  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:22.378420  401365 type.go:168] "Request Body" body=""
	I1210 06:33:22.378487  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:22.378808  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:22.877547  401365 type.go:168] "Request Body" body=""
	I1210 06:33:22.877625  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:22.877980  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:23.377731  401365 type.go:168] "Request Body" body=""
	I1210 06:33:23.377812  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:23.378164  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:23.378221  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:23.877756  401365 type.go:168] "Request Body" body=""
	I1210 06:33:23.877825  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:23.878080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:24.377759  401365 type.go:168] "Request Body" body=""
	I1210 06:33:24.377846  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:24.378207  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:24.878036  401365 type.go:168] "Request Body" body=""
	I1210 06:33:24.878119  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:24.878474  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:25.378280  401365 type.go:168] "Request Body" body=""
	I1210 06:33:25.378375  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:25.378683  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:25.378744  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:25.878089  401365 type.go:168] "Request Body" body=""
	I1210 06:33:25.878190  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:25.878571  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:26.378247  401365 type.go:168] "Request Body" body=""
	I1210 06:33:26.378325  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:26.378653  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:26.878389  401365 type.go:168] "Request Body" body=""
	I1210 06:33:26.878457  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:26.878720  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:27.378526  401365 type.go:168] "Request Body" body=""
	I1210 06:33:27.378607  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:27.378943  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:27.379002  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:27.877690  401365 type.go:168] "Request Body" body=""
	I1210 06:33:27.877775  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:27.878121  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:28.377561  401365 type.go:168] "Request Body" body=""
	I1210 06:33:28.377635  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:28.377959  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:28.877670  401365 type.go:168] "Request Body" body=""
	I1210 06:33:28.877750  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:28.878089  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:29.378437  401365 type.go:168] "Request Body" body=""
	I1210 06:33:29.378518  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:29.378867  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:29.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:29.877685  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:29.877995  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:29.878058  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:30.377631  401365 type.go:168] "Request Body" body=""
	I1210 06:33:30.377707  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:30.378042  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:30.877750  401365 type.go:168] "Request Body" body=""
	I1210 06:33:30.877827  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:30.878132  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:31.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:33:31.377692  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:31.377951  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:31.877635  401365 type.go:168] "Request Body" body=""
	I1210 06:33:31.877717  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:31.878049  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:31.878116  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:32.377678  401365 type.go:168] "Request Body" body=""
	I1210 06:33:32.377760  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:32.378103  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:32.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:32.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:32.877995  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:33.377756  401365 type.go:168] "Request Body" body=""
	I1210 06:33:33.377840  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:33.378198  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:33.877915  401365 type.go:168] "Request Body" body=""
	I1210 06:33:33.877993  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:33.878332  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:33.878392  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:34.377635  401365 type.go:168] "Request Body" body=""
	I1210 06:33:34.377727  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:34.378085  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:34.878096  401365 type.go:168] "Request Body" body=""
	I1210 06:33:34.878177  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:34.878550  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:35.378207  401365 type.go:168] "Request Body" body=""
	I1210 06:33:35.378280  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:35.378622  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:35.878407  401365 type.go:168] "Request Body" body=""
	I1210 06:33:35.878481  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:35.878777  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:35.878833  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:36.378544  401365 type.go:168] "Request Body" body=""
	I1210 06:33:36.378618  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:36.378979  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:36.877667  401365 type.go:168] "Request Body" body=""
	I1210 06:33:36.877742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:36.878064  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:37.377603  401365 type.go:168] "Request Body" body=""
	I1210 06:33:37.377674  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:37.377946  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:37.877690  401365 type.go:168] "Request Body" body=""
	I1210 06:33:37.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:37.878181  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:38.377888  401365 type.go:168] "Request Body" body=""
	I1210 06:33:38.377973  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:38.378298  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:38.378347  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:38.877651  401365 type.go:168] "Request Body" body=""
	I1210 06:33:38.877756  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:38.878041  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:39.377680  401365 type.go:168] "Request Body" body=""
	I1210 06:33:39.377755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:39.378094  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:39.877930  401365 type.go:168] "Request Body" body=""
	I1210 06:33:39.878008  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:39.878344  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:40.378300  401365 type.go:168] "Request Body" body=""
	I1210 06:33:40.378366  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:40.378615  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:40.378657  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:40.878469  401365 type.go:168] "Request Body" body=""
	I1210 06:33:40.878546  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:40.878897  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:41.378609  401365 type.go:168] "Request Body" body=""
	I1210 06:33:41.378684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:41.379020  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:41.877612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:41.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:41.878041  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:42.377673  401365 type.go:168] "Request Body" body=""
	I1210 06:33:42.377747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:42.378116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:42.877854  401365 type.go:168] "Request Body" body=""
	I1210 06:33:42.877940  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:42.878285  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:42.878351  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:43.377678  401365 type.go:168] "Request Body" body=""
	I1210 06:33:43.377746  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.378043  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:43.877664  401365 type.go:168] "Request Body" body=""
	I1210 06:33:43.877741  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.878068  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:44.377646  401365 type.go:168] "Request Body" body=""
	I1210 06:33:44.377729  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.378109  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:44.877931  401365 type.go:168] "Request Body" body=""
	I1210 06:33:44.878000  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.878273  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:45.377683  401365 type.go:168] "Request Body" body=""
	I1210 06:33:45.377768  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.378162  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:45.378230  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:45.877646  401365 type.go:168] "Request Body" body=""
	I1210 06:33:45.877726  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.878079  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.378365  401365 type.go:168] "Request Body" body=""
	I1210 06:33:46.378443  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.378778  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.878592  401365 type.go:168] "Request Body" body=""
	I1210 06:33:46.878667  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.879016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.377612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:47.377693  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.378037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.878404  401365 type.go:168] "Request Body" body=""
	I1210 06:33:47.878481  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.878791  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:47.878833  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:48.378603  401365 type.go:168] "Request Body" body=""
	I1210 06:33:48.378679  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.379038  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:48.877634  401365 type.go:168] "Request Body" body=""
	I1210 06:33:48.877710  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.878058  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.377585  401365 type.go:168] "Request Body" body=""
	I1210 06:33:49.377661  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.377929  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.877952  401365 type.go:168] "Request Body" body=""
	I1210 06:33:49.878030  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.878370  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:50.378433  401365 type.go:168] "Request Body" body=""
	I1210 06:33:50.378512  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.378851  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:50.378908  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:50.878409  401365 type.go:168] "Request Body" body=""
	I1210 06:33:50.878484  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.878745  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.378528  401365 type.go:168] "Request Body" body=""
	I1210 06:33:51.378608  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.378930  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.877680  401365 type.go:168] "Request Body" body=""
	I1210 06:33:51.877772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.878121  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:33:52.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.378042  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.877736  401365 type.go:168] "Request Body" body=""
	I1210 06:33:52.877859  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.878200  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:52.878263  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:53.377750  401365 type.go:168] "Request Body" body=""
	I1210 06:33:53.377829  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.378164  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:53.878375  401365 type.go:168] "Request Body" body=""
	I1210 06:33:53.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.878711  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.378552  401365 type.go:168] "Request Body" body=""
	I1210 06:33:54.378627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.378978  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.877937  401365 type.go:168] "Request Body" body=""
	I1210 06:33:54.878016  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.878372  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:54.878426  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:55.377557  401365 type.go:168] "Request Body" body=""
	I1210 06:33:55.377627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.377890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:55.877581  401365 type.go:168] "Request Body" body=""
	I1210 06:33:55.877661  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.878044  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:33:56.377687  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.378031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.878382  401365 type.go:168] "Request Body" body=""
	I1210 06:33:56.878463  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.878747  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:56.878792  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:57.378563  401365 type.go:168] "Request Body" body=""
	I1210 06:33:57.378655  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.379048  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:57.878429  401365 type.go:168] "Request Body" body=""
	I1210 06:33:57.878505  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.878838  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.378383  401365 type.go:168] "Request Body" body=""
	I1210 06:33:58.378457  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.378729  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.878537  401365 type.go:168] "Request Body" body=""
	I1210 06:33:58.878615  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.878961  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:58.879020  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:59.377698  401365 type.go:168] "Request Body" body=""
	I1210 06:33:59.377777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.378091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:59.877943  401365 type.go:168] "Request Body" body=""
	I1210 06:33:59.878015  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.878285  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.388459  401365 type.go:168] "Request Body" body=""
	I1210 06:34:00.388551  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.388936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.877633  401365 type.go:168] "Request Body" body=""
	I1210 06:34:00.877711  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.878072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:34:01.377692  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.377964  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:01.378006  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:01.877703  401365 type.go:168] "Request Body" body=""
	I1210 06:34:01.877777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.878116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.377805  401365 type.go:168] "Request Body" body=""
	I1210 06:34:02.377886  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.378243  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.877793  401365 type.go:168] "Request Body" body=""
	I1210 06:34:02.877861  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.878116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:03.377724  401365 type.go:168] "Request Body" body=""
	I1210 06:34:03.377819  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.378176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:03.378248  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:03.877926  401365 type.go:168] "Request Body" body=""
	I1210 06:34:03.877998  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.878340  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.378166  401365 type.go:168] "Request Body" body=""
	I1210 06:34:04.378243  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.378539  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.878398  401365 type.go:168] "Request Body" body=""
	I1210 06:34:04.878479  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.878809  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:05.378551  401365 type.go:168] "Request Body" body=""
	I1210 06:34:05.378630  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.379127  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:05.379181  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:05.877595  401365 type.go:168] "Request Body" body=""
	I1210 06:34:05.877669  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.877928  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.377667  401365 type.go:168] "Request Body" body=""
	I1210 06:34:06.377742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.378094  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.877685  401365 type.go:168] "Request Body" body=""
	I1210 06:34:06.877764  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.878112  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.378383  401365 type.go:168] "Request Body" body=""
	I1210 06:34:07.378451  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.378722  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.878478  401365 type.go:168] "Request Body" body=""
	I1210 06:34:07.878553  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.878918  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:07.878972  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:08.378592  401365 type.go:168] "Request Body" body=""
	I1210 06:34:08.378675  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.379031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:08.877609  401365 type.go:168] "Request Body" body=""
	I1210 06:34:08.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.877968  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.377659  401365 type.go:168] "Request Body" body=""
	I1210 06:34:09.377734  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.378072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.877922  401365 type.go:168] "Request Body" body=""
	I1210 06:34:09.878005  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.878441  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:10.378514  401365 type.go:168] "Request Body" body=""
	I1210 06:34:10.378590  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.378890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:10.378934  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:10.877619  401365 type.go:168] "Request Body" body=""
	I1210 06:34:10.877709  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.878026  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:11.377616  401365 type.go:168] "Request Body" body=""
	I1210 06:34:11.377679  401365 node_ready.go:38] duration metric: took 6m0.000247895s for node "functional-253997" to be "Ready" ...
	I1210 06:34:11.380832  401365 out.go:203] 
	W1210 06:34:11.383623  401365 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:34:11.383641  401365 out.go:285] * 
	W1210 06:34:11.385783  401365 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:34:11.388549  401365 out.go:203] 
	
	
	==> CRI-O <==
	Dec 10 06:34:20 functional-253997 crio[6019]: time="2025-12-10T06:34:20.226234681Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=a708ff60-bcea-483c-b679-ca4b4043100c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.320917933Z" level=info msg="Checking image status: minikube-local-cache-test:functional-253997" id=8cf16b14-ad6c-4516-86f9-efc2d004c46d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.32123257Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.321307328Z" level=info msg="Image minikube-local-cache-test:functional-253997 not found" id=8cf16b14-ad6c-4516-86f9-efc2d004c46d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.321416589Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-253997 found" id=8cf16b14-ad6c-4516-86f9-efc2d004c46d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.35050104Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-253997" id=723203b4-8d87-467a-93f7-8a39c29b88e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.350679963Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-253997 not found" id=723203b4-8d87-467a-93f7-8a39c29b88e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.350723106Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-253997 found" id=723203b4-8d87-467a-93f7-8a39c29b88e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.380319293Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-253997" id=a6b043ee-ff17-4b7e-a039-3a7f7e9eb2ef name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.380461883Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-253997 not found" id=a6b043ee-ff17-4b7e-a039-3a7f7e9eb2ef name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.38050327Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-253997 found" id=a6b043ee-ff17-4b7e-a039-3a7f7e9eb2ef name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:22 functional-253997 crio[6019]: time="2025-12-10T06:34:22.426976539Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=67fa22eb-6c70-41dd-bbb9-9c421c692d3d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:22 functional-253997 crio[6019]: time="2025-12-10T06:34:22.752414901Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1c569856-c3e8-4856-8110-7045b84de2ec name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:22 functional-253997 crio[6019]: time="2025-12-10T06:34:22.752561069Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=1c569856-c3e8-4856-8110-7045b84de2ec name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:22 functional-253997 crio[6019]: time="2025-12-10T06:34:22.752598329Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=1c569856-c3e8-4856-8110-7045b84de2ec name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.316751862Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=cc2e7227-fcb5-4e0b-b647-12314d70d789 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.316887905Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=cc2e7227-fcb5-4e0b-b647-12314d70d789 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.316926034Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=cc2e7227-fcb5-4e0b-b647-12314d70d789 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.360744336Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=0ea1486a-0d89-4a21-929d-c4be9efea554 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.360877589Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=0ea1486a-0d89-4a21-929d-c4be9efea554 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.360926697Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=0ea1486a-0d89-4a21-929d-c4be9efea554 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.388284672Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5c436c91-3a6e-45fc-b84a-f452aff3ecbe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.388443919Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5c436c91-3a6e-45fc-b84a-f452aff3ecbe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.388496293Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5c436c91-3a6e-45fc-b84a-f452aff3ecbe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.911412131Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f800cde8-651a-4555-a685-1c738a5e3283 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:34:25.498684   10020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:25.499290   10020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:25.501283   10020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:25.501877   10020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:25.503511   10020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 06:15] overlayfs: idmapped layers are currently not supported
	[Dec10 06:16] overlayfs: idmapped layers are currently not supported
	[Dec10 06:19] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:34:25 up  3:16,  0 user,  load average: 0.28, 0.29, 0.81
	Linux functional-253997 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:34:23 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:34:23 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 10 06:34:23 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:23 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:23 functional-253997 kubelet[9901]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:23 functional-253997 kubelet[9901]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:23 functional-253997 kubelet[9901]: E1210 06:34:23.940995    9901 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:34:23 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:34:23 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:34:24 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 10 06:34:24 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:24 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:24 functional-253997 kubelet[9931]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:24 functional-253997 kubelet[9931]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:24 functional-253997 kubelet[9931]: E1210 06:34:24.693022    9931 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:34:24 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:34:24 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:34:25 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Dec 10 06:34:25 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:25 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:25 functional-253997 kubelet[10003]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:25 functional-253997 kubelet[10003]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:25 functional-253997 kubelet[10003]: E1210 06:34:25.437643   10003 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:34:25 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:34:25 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
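The kubelet section of the log above pinpoints the root cause of this failure: the v1.35.0-rc.1 kubelet exits on startup with "kubelet is configured to not run on a host using cgroup v1", so the API server on port 8441 never comes up and the six-minute node-Ready poll exhausts its deadline against connection-refused errors. A minimal shell sketch for confirming the host cgroup mode, assuming access to the Jenkins host or to the node via `minikube ssh` (the filesystem-type check is the standard one, not something taken from this report):

	# Check which cgroup hierarchy the host mounts (assumption: run on the
	# affected host or inside the functional-253997 node container).
	stat -fc %T /sys/fs/cgroup/
	# "cgroup2fs" means the unified v2 hierarchy; "tmpfs" indicates legacy
	# cgroup v1, which this kubelet build refuses. On systemd hosts, booting
	# with systemd.unified_cgroup_hierarchy=1 switches to v2.
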
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 2 (334.910795ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.52s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-253997 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-253997 get pods: exit status 1 (108.218093ms)

** stderr ** 
	E1210 06:34:26.698114  407017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:34:26.698468  407017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:34:26.699879  407017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:34:26.700157  407017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:34:26.701521  407017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-253997 get pods": exit status 1
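Every stderr line above is the same symptom: nothing is listening on 192.168.49.2:8441 because the kubelet, and therefore the apiserver, never came up. A plain TCP dial reproduces the failure without kubectl's retry noise; a minimal sketch:

	// Probe the apiserver endpoint from the stderr output above; while the
	// apiserver is down this fails with "connect: connection refused".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}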
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-253997
helpers_test.go:244: (dbg) docker inspect functional-253997:

-- stdout --
	[
	    {
	        "Id": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	        "Created": "2025-12-10T06:19:33.832297734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:33.914699563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hosts",
	        "LogPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7-json.log",
	        "Name": "/functional-253997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-253997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-253997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	                "LowerDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-253997",
	                "Source": "/var/lib/docker/volumes/functional-253997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253997",
	                "name.minikube.sigs.k8s.io": "functional-253997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484c42edbd556e17e2c5a836571d3275bc9cbd157b70c100e08a5b73322a62e6",
	            "SandboxKey": "/var/run/docker/netns/484c42edbd55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ba:52:c5:6e:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fc5aadc020d5981cfdab05726de722ae81d653cbc30b66014568069657370fe7",
	                    "EndpointID": "9ecd4882ea5c9fe5581b68ad15a0a2f450e34777aee426fcc158008c81076e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253997",
	                        "256e059cdf1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
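The inspect output shows the container itself is healthy (State.Status "running", RestartCount 0) and that the apiserver port 8441/tcp is published on 127.0.0.1:33162, so the failure is inside the guest, not in Docker's port wiring. The Go-template lookup the harness keeps running for "22/tcp" works the same way for the apiserver port; a sketch, with the container name taken from this report:

	// Same inspect-template lookup as the log lines show for "22/tcp",
	// pointed at the apiserver port instead.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-253997").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("apiserver host port: %s", out) // "33162" in this run
	}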
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997: exit status 2 (321.737282ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-253997 logs -n 25: (1.015598105s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-013831 image ls --format json --alsologtostderr                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image build -t localhost/my-image:functional-013831 testdata/build --alsologtostderr                                          │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format table --alsologtostderr                                                                                     │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls                                                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ delete         │ -p functional-013831                                                                                                                            │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start          │ -p functional-253997 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ start          │ -p functional-253997 --alsologtostderr -v=8                                                                                                     │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:28 UTC │                     │
	│ cache          │ functional-253997 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache add registry.k8s.io/pause:latest                                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache add minikube-local-cache-test:functional-253997                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache delete minikube-local-cache-test:functional-253997                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl images                                                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │                     │
	│ cache          │ functional-253997 cache reload                                                                                                                  │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ kubectl        │ functional-253997 kubectl -- --context functional-253997 get pods                                                                               │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:28:04
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:28:04.696682  401365 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:28:04.696859  401365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:28:04.696892  401365 out.go:374] Setting ErrFile to fd 2...
	I1210 06:28:04.696914  401365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:28:04.697215  401365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:28:04.697662  401365 out.go:368] Setting JSON to false
	I1210 06:28:04.698567  401365 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11437,"bootTime":1765336648,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:28:04.698673  401365 start.go:143] virtualization:  
	I1210 06:28:04.702443  401365 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:28:04.705481  401365 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:28:04.705615  401365 notify.go:221] Checking for updates...
	I1210 06:28:04.711086  401365 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:28:04.713917  401365 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:04.716867  401365 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:28:04.719925  401365 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:28:04.722835  401365 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:28:04.726336  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:04.726469  401365 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:28:04.754166  401365 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:28:04.754279  401365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:28:04.810645  401365 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:28:04.801435563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:28:04.810756  401365 docker.go:319] overlay module found
	I1210 06:28:04.813864  401365 out.go:179] * Using the docker driver based on existing profile
	I1210 06:28:04.816769  401365 start.go:309] selected driver: docker
	I1210 06:28:04.816791  401365 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:04.816907  401365 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:28:04.817028  401365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:28:04.870143  401365 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:28:04.860525891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:28:04.870593  401365 cni.go:84] Creating CNI manager for ""
	I1210 06:28:04.870644  401365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:28:04.870692  401365 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:04.873854  401365 out.go:179] * Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	I1210 06:28:04.876935  401365 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:28:04.879860  401365 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:28:04.882747  401365 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:28:04.882931  401365 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:28:04.906679  401365 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:28:04.906698  401365 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:28:04.939349  401365 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 06:28:05.106989  401365 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
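These two 404s explain the slow path that follows: no preloaded image tarball is published for the v1.35.0-rc.1 release candidate, so minikube falls back to caching each image individually (the cache.go lines further below). The existence probe is an ordinary HTTP request; a sketch against the first URL from the warnings:

	// Check whether the preload tarball exists; 404 here means minikube
	// takes the per-image fallback seen later in this log.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("status:", resp.StatusCode) // 404 in this run
	}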
	I1210 06:28:05.107216  401365 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/config.json ...
	I1210 06:28:05.107505  401365 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:28:05.107566  401365 start.go:360] acquireMachinesLock for functional-253997: {Name:mkd4a204596bf14d7530a6c5c103756527acdf26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.107643  401365 start.go:364] duration metric: took 39.278µs to acquireMachinesLock for "functional-253997"
	I1210 06:28:05.107681  401365 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:28:05.107701  401365 fix.go:54] fixHost starting: 
	I1210 06:28:05.107821  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.108032  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:05.134635  401365 fix.go:112] recreateIfNeeded on functional-253997: state=Running err=<nil>
	W1210 06:28:05.134664  401365 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:28:05.138161  401365 out.go:252] * Updating the running docker "functional-253997" container ...
	I1210 06:28:05.138204  401365 machine.go:94] provisionDockerMachine start ...
	I1210 06:28:05.138290  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.156912  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.157271  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.157282  401365 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:28:05.272681  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.312543  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:28:05.312568  401365 ubuntu.go:182] provisioning hostname "functional-253997"
	I1210 06:28:05.312643  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.337102  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.337416  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.337433  401365 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-253997 && echo "functional-253997" | sudo tee /etc/hostname
	I1210 06:28:05.435781  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:05.503700  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:28:05.503808  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.525010  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:05.525371  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:05.525395  401365 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-253997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-253997/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-253997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:28:05.596990  401365 cache.go:107] acquiring lock: {Name:mk9268471d41d450748d4b0133d1a043378776f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597093  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:28:05.597107  401365 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 135.879µs
	I1210 06:28:05.597123  401365 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597148  401365 cache.go:107] acquiring lock: {Name:mk8250dc655f821bf2674ed77a7683798ede4f4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597196  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:28:05.597205  401365 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 71.098µs
	I1210 06:28:05.597212  401365 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597224  401365 cache.go:107] acquiring lock: {Name:mk1c0334e51772e4f6b3429cc05b91ec19d54b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597256  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:28:05.597264  401365 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 41.773µs
	I1210 06:28:05.597271  401365 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597286  401365 cache.go:107] acquiring lock: {Name:mk41aaba55a3296a937cf380d44dc9920df69546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597313  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:28:05.597325  401365 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 45.342µs
	I1210 06:28:05.597331  401365 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:28:05.597347  401365 cache.go:107] acquiring lock: {Name:mkc2831727d74e5fadb1c00bd89f0236b385200e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597380  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:28:05.597390  401365 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 49.009µs
	I1210 06:28:05.597395  401365 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:28:05.597404  401365 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597432  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:28:05.597441  401365 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 38.597µs
	I1210 06:28:05.597447  401365 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:28:05.597457  401365 cache.go:107] acquiring lock: {Name:mk7e052900e41419dc78d3311f27db508a8f64b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597487  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:28:05.597494  401365 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.163µs
	I1210 06:28:05.597499  401365 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:28:05.597517  401365 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:28:05.597571  401365 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:28:05.597584  401365 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.023µs
	I1210 06:28:05.597591  401365 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:28:05.597598  401365 cache.go:87] Successfully saved all images to host disk.
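The cache lines above also document the on-disk layout: each image is stored under cache/images/<arch>/ with the tag's ':' replaced by '_'. A sketch of that mapping (the helper name is an assumption; the root path is this CI run's MINIKUBE_HOME):

	// Reproduce the image → cache-file mapping visible in the log above.
	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	func cachePath(root, arch, image string) string {
		return filepath.Join(root, "cache", "images", arch,
			strings.ReplaceAll(image, ":", "_"))
	}

	func main() {
		fmt.Println(cachePath(
			"/home/jenkins/minikube-integration/22094-362392/.minikube",
			"arm64", "registry.k8s.io/kube-proxy:v1.35.0-rc.1"))
		// .../cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	}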
	I1210 06:28:05.681682  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:28:05.681708  401365 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 06:28:05.681741  401365 ubuntu.go:190] setting up certificates
	I1210 06:28:05.681752  401365 provision.go:84] configureAuth start
	I1210 06:28:05.681819  401365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:28:05.699808  401365 provision.go:143] copyHostCerts
	I1210 06:28:05.699863  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:28:05.699905  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 06:28:05.699919  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:28:05.699992  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 06:28:05.700081  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:28:05.700104  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 06:28:05.700113  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:28:05.700142  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 06:28:05.700188  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:28:05.700207  401365 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 06:28:05.700218  401365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:28:05.700242  401365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 06:28:05.700300  401365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.functional-253997 san=[127.0.0.1 192.168.49.2 functional-253997 localhost minikube]
	I1210 06:28:05.936274  401365 provision.go:177] copyRemoteCerts
	I1210 06:28:05.936350  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:28:05.936418  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:05.954560  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.065031  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 06:28:06.065092  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:28:06.082556  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 06:28:06.082620  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:28:06.101057  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 06:28:06.101135  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:28:06.119676  401365 provision.go:87] duration metric: took 437.892883ms to configureAuth
	I1210 06:28:06.119777  401365 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:28:06.119980  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:06.120085  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.137920  401365 main.go:143] libmachine: Using SSH client type: native
	I1210 06:28:06.138235  401365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:28:06.138256  401365 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:28:06.452845  401365 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:28:06.452929  401365 machine.go:97] duration metric: took 1.314715304s to provisionDockerMachine
	I1210 06:28:06.452956  401365 start.go:293] postStartSetup for "functional-253997" (driver="docker")
	I1210 06:28:06.452990  401365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:28:06.453063  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:28:06.453144  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.470784  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.577269  401365 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:28:06.580692  401365 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 06:28:06.580715  401365 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 06:28:06.580720  401365 command_runner.go:130] > VERSION_ID="12"
	I1210 06:28:06.580725  401365 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 06:28:06.580730  401365 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 06:28:06.580768  401365 command_runner.go:130] > ID=debian
	I1210 06:28:06.580780  401365 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 06:28:06.580785  401365 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 06:28:06.580791  401365 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 06:28:06.580887  401365 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:28:06.580933  401365 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:28:06.580952  401365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 06:28:06.581012  401365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 06:28:06.581098  401365 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 06:28:06.581111  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> /etc/ssl/certs/3642652.pem
	I1210 06:28:06.581203  401365 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> hosts in /etc/test/nested/copy/364265
	I1210 06:28:06.581211  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> /etc/test/nested/copy/364265/hosts
	I1210 06:28:06.581307  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364265
	I1210 06:28:06.588834  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:28:06.607350  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts --> /etc/test/nested/copy/364265/hosts (40 bytes)
	I1210 06:28:06.625111  401365 start.go:296] duration metric: took 172.118023ms for postStartSetup
	I1210 06:28:06.625251  401365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:28:06.625310  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.643314  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.746089  401365 command_runner.go:130] > 11%
	I1210 06:28:06.746641  401365 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:28:06.751190  401365 command_runner.go:130] > 174G
	I1210 06:28:06.751596  401365 fix.go:56] duration metric: took 1.643890859s for fixHost
	I1210 06:28:06.751620  401365 start.go:83] releasing machines lock for "functional-253997", held for 1.643948944s
	I1210 06:28:06.751695  401365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:28:06.769599  401365 ssh_runner.go:195] Run: cat /version.json
	I1210 06:28:06.769653  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.769923  401365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:28:06.769973  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:06.794205  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.801527  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:06.995023  401365 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 06:28:06.995129  401365 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 06:28:06.995269  401365 ssh_runner.go:195] Run: systemctl --version
	I1210 06:28:07.001581  401365 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 06:28:07.001629  401365 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 06:28:07.002099  401365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:28:07.048284  401365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 06:28:07.052994  401365 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 06:28:07.053661  401365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:28:07.053769  401365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:28:07.062754  401365 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:28:07.062818  401365 start.go:496] detecting cgroup driver to use...
	I1210 06:28:07.062869  401365 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:28:07.062946  401365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:28:07.079107  401365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:28:07.094803  401365 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:28:07.094958  401365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:28:07.114470  401365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:28:07.128193  401365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:28:07.258424  401365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:28:07.374265  401365 docker.go:234] disabling docker service ...
	I1210 06:28:07.374339  401365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:28:07.389285  401365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:28:07.403201  401365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:28:07.521904  401365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:28:07.641023  401365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:28:07.653771  401365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:28:07.666535  401365 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
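
The tee above seeds /etc/crictl.yaml so crictl talks to CRI-O's socket by default; per the echoed line, the whole file is:

    runtime-endpoint: unix:///var/run/crio/crio.sock

An equivalent check without the config file (a sketch; --runtime-endpoint is a standard crictl flag, but this exact invocation is not from this log):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
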
	I1210 06:28:07.667719  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:07.817082  401365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:28:07.817158  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.826426  401365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:28:07.826509  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.835611  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.844530  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.853511  401365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:28:07.861378  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.870726  401365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:07.879012  401365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
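
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment (reconstructed from the commands here and the `crio config` dump later in this log; the TOML section placement is an assumption):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
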
	I1210 06:28:07.888039  401365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:28:07.894740  401365 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 06:28:07.895767  401365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:28:07.903878  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:08.028500  401365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:28:08.203883  401365 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:28:08.204004  401365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:28:08.207826  401365 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1210 06:28:08.207850  401365 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 06:28:08.207858  401365 command_runner.go:130] > Device: 0,72	Inode: 1753        Links: 1
	I1210 06:28:08.207864  401365 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:28:08.207869  401365 command_runner.go:130] > Access: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207875  401365 command_runner.go:130] > Modify: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207879  401365 command_runner.go:130] > Change: 2025-12-10 06:28:08.143329752 +0000
	I1210 06:28:08.207883  401365 command_runner.go:130] >  Birth: -
	I1210 06:28:08.207920  401365 start.go:564] Will wait 60s for crictl version
	I1210 06:28:08.207972  401365 ssh_runner.go:195] Run: which crictl
	I1210 06:28:08.211603  401365 command_runner.go:130] > /usr/local/bin/crictl
	I1210 06:28:08.211673  401365 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:28:08.233344  401365 command_runner.go:130] > Version:  0.1.0
	I1210 06:28:08.233366  401365 command_runner.go:130] > RuntimeName:  cri-o
	I1210 06:28:08.233371  401365 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1210 06:28:08.233486  401365 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 06:28:08.235784  401365 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:28:08.235868  401365 ssh_runner.go:195] Run: crio --version
	I1210 06:28:08.263554  401365 command_runner.go:130] > crio version 1.34.3
	I1210 06:28:08.263582  401365 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 06:28:08.263590  401365 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 06:28:08.263598  401365 command_runner.go:130] >    GitTreeState:   dirty
	I1210 06:28:08.263603  401365 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 06:28:08.263609  401365 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 06:28:08.263614  401365 command_runner.go:130] >    Compiler:       gc
	I1210 06:28:08.263618  401365 command_runner.go:130] >    Platform:       linux/arm64
	I1210 06:28:08.263625  401365 command_runner.go:130] >    Linkmode:       static
	I1210 06:28:08.263631  401365 command_runner.go:130] >    BuildTags:
	I1210 06:28:08.263635  401365 command_runner.go:130] >      static
	I1210 06:28:08.263641  401365 command_runner.go:130] >      netgo
	I1210 06:28:08.263644  401365 command_runner.go:130] >      osusergo
	I1210 06:28:08.263649  401365 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 06:28:08.263658  401365 command_runner.go:130] >      seccomp
	I1210 06:28:08.263662  401365 command_runner.go:130] >      apparmor
	I1210 06:28:08.263665  401365 command_runner.go:130] >      selinux
	I1210 06:28:08.263673  401365 command_runner.go:130] >    LDFlags:          unknown
	I1210 06:28:08.263678  401365 command_runner.go:130] >    SeccompEnabled:   true
	I1210 06:28:08.263686  401365 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 06:28:08.265277  401365 ssh_runner.go:195] Run: crio --version
	I1210 06:28:08.292854  401365 command_runner.go:130] > crio version 1.34.3
	I1210 06:28:08.292877  401365 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 06:28:08.292884  401365 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 06:28:08.292894  401365 command_runner.go:130] >    GitTreeState:   dirty
	I1210 06:28:08.292899  401365 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 06:28:08.292903  401365 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 06:28:08.292909  401365 command_runner.go:130] >    Compiler:       gc
	I1210 06:28:08.292914  401365 command_runner.go:130] >    Platform:       linux/arm64
	I1210 06:28:08.292918  401365 command_runner.go:130] >    Linkmode:       static
	I1210 06:28:08.292921  401365 command_runner.go:130] >    BuildTags:
	I1210 06:28:08.292925  401365 command_runner.go:130] >      static
	I1210 06:28:08.292929  401365 command_runner.go:130] >      netgo
	I1210 06:28:08.292932  401365 command_runner.go:130] >      osusergo
	I1210 06:28:08.292936  401365 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 06:28:08.292939  401365 command_runner.go:130] >      seccomp
	I1210 06:28:08.292943  401365 command_runner.go:130] >      apparmor
	I1210 06:28:08.292947  401365 command_runner.go:130] >      selinux
	I1210 06:28:08.292951  401365 command_runner.go:130] >    LDFlags:          unknown
	I1210 06:28:08.292955  401365 command_runner.go:130] >    SeccompEnabled:   true
	I1210 06:28:08.292959  401365 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 06:28:08.297960  401365 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:28:08.300955  401365 cli_runner.go:164] Run: docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
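
The Go template in the network inspect above pulls the network's name, driver, subnet, gateway, MTU and container IPs out of a single docker call. A trimmed-down sketch of the same idea (the printed values are illustrative assumptions; this run does not echo them, though 192.168.49.1 and 192.168.49.2 appear as gateway and node IP below):

    docker network inspect functional-253997 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # e.g. 192.168.49.0/24 192.168.49.1
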
	I1210 06:28:08.316701  401365 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:28:08.320890  401365 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 06:28:08.321107  401365 kubeadm.go:884] updating cluster {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:28:08.321383  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.467539  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.630219  401365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:28:08.778675  401365 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:28:08.778770  401365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:28:08.809702  401365 command_runner.go:130] > {
	I1210 06:28:08.809721  401365 command_runner.go:130] >   "images":  [
	I1210 06:28:08.809725  401365 command_runner.go:130] >     {
	I1210 06:28:08.809734  401365 command_runner.go:130] >       "id":  "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1210 06:28:08.809739  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809744  401365 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 06:28:08.809748  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809753  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809762  401365 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"
	I1210 06:28:08.809765  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809770  401365 command_runner.go:130] >       "size":  "29035622",
	I1210 06:28:08.809784  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809789  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809792  401365 command_runner.go:130] >     },
	I1210 06:28:08.809795  401365 command_runner.go:130] >     {
	I1210 06:28:08.809802  401365 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 06:28:08.809806  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809812  401365 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 06:28:08.809815  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809819  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809827  401365 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"
	I1210 06:28:08.809830  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809834  401365 command_runner.go:130] >       "size":  "74488375",
	I1210 06:28:08.809839  401365 command_runner.go:130] >       "username":  "nonroot",
	I1210 06:28:08.809843  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809846  401365 command_runner.go:130] >     },
	I1210 06:28:08.809850  401365 command_runner.go:130] >     {
	I1210 06:28:08.809856  401365 command_runner.go:130] >       "id":  "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1210 06:28:08.809860  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809865  401365 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1210 06:28:08.809868  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809872  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809882  401365 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:c711b5adb8fed0c7e2a62a6c327fd0f8486f90b93c1ebd0ba1e790b930373aae"
	I1210 06:28:08.809885  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809889  401365 command_runner.go:130] >       "size":  "60849030",
	I1210 06:28:08.809893  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.809897  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.809900  401365 command_runner.go:130] >       },
	I1210 06:28:08.809904  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809908  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809911  401365 command_runner.go:130] >     },
	I1210 06:28:08.809915  401365 command_runner.go:130] >     {
	I1210 06:28:08.809921  401365 command_runner.go:130] >       "id":  "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1210 06:28:08.809925  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809934  401365 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1210 06:28:08.809938  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809941  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.809949  401365 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:112cff968330968b8d8cc75dd9f232b5b048067a2151629e56b80ff7af621b72"
	I1210 06:28:08.809954  401365 command_runner.go:130] >       ],
	I1210 06:28:08.809958  401365 command_runner.go:130] >       "size":  "85012778",
	I1210 06:28:08.809961  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.809965  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.809968  401365 command_runner.go:130] >       },
	I1210 06:28:08.809973  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.809977  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.809980  401365 command_runner.go:130] >     },
	I1210 06:28:08.809983  401365 command_runner.go:130] >     {
	I1210 06:28:08.809989  401365 command_runner.go:130] >       "id":  "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1210 06:28:08.809994  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.809999  401365 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1210 06:28:08.810002  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810006  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810014  401365 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:89a8e28214a1c4b4631281e37feaf54299e7510d43c5445d4adc88575390f71e"
	I1210 06:28:08.810017  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810021  401365 command_runner.go:130] >       "size":  "72167568",
	I1210 06:28:08.810030  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810035  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.810038  401365 command_runner.go:130] >       },
	I1210 06:28:08.810042  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810046  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810049  401365 command_runner.go:130] >     },
	I1210 06:28:08.810052  401365 command_runner.go:130] >     {
	I1210 06:28:08.810058  401365 command_runner.go:130] >       "id":  "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1210 06:28:08.810062  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810068  401365 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1210 06:28:08.810072  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810076  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810086  401365 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:01f8409e5c04cbb256f29dc118ff184a24b9cbb97c7f6d3bd462333b366566ca"
	I1210 06:28:08.810089  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810093  401365 command_runner.go:130] >       "size":  "74105636",
	I1210 06:28:08.810097  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810101  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810104  401365 command_runner.go:130] >     },
	I1210 06:28:08.810107  401365 command_runner.go:130] >     {
	I1210 06:28:08.810114  401365 command_runner.go:130] >       "id":  "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1210 06:28:08.810117  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810127  401365 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1210 06:28:08.810131  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810134  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810144  401365 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9a2a4c91f5fdaf1a3c5e839f425cc7461b9e24d5f0816c207638df25ade3eda9"
	I1210 06:28:08.810147  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810151  401365 command_runner.go:130] >       "size":  "49819792",
	I1210 06:28:08.810154  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810158  401365 command_runner.go:130] >         "value":  "0"
	I1210 06:28:08.810160  401365 command_runner.go:130] >       },
	I1210 06:28:08.810165  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810169  401365 command_runner.go:130] >       "pinned":  false
	I1210 06:28:08.810172  401365 command_runner.go:130] >     },
	I1210 06:28:08.810175  401365 command_runner.go:130] >     {
	I1210 06:28:08.810181  401365 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 06:28:08.810185  401365 command_runner.go:130] >       "repoTags":  [
	I1210 06:28:08.810189  401365 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 06:28:08.810192  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810196  401365 command_runner.go:130] >       "repoDigests":  [
	I1210 06:28:08.810203  401365 command_runner.go:130] >         "registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"
	I1210 06:28:08.810206  401365 command_runner.go:130] >       ],
	I1210 06:28:08.810210  401365 command_runner.go:130] >       "size":  "517328",
	I1210 06:28:08.810213  401365 command_runner.go:130] >       "uid":  {
	I1210 06:28:08.810217  401365 command_runner.go:130] >         "value":  "65535"
	I1210 06:28:08.810220  401365 command_runner.go:130] >       },
	I1210 06:28:08.810228  401365 command_runner.go:130] >       "username":  "",
	I1210 06:28:08.810232  401365 command_runner.go:130] >       "pinned":  true
	I1210 06:28:08.810234  401365 command_runner.go:130] >     }
	I1210 06:28:08.810237  401365 command_runner.go:130] >   ]
	I1210 06:28:08.810240  401365 command_runner.go:130] > }
	I1210 06:28:08.812152  401365 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:28:08.812177  401365 cache_images.go:86] Images are preloaded, skipping loading
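
The preload check parses the `crictl images --output json` block ending at the `}` above and matches it against the image set expected for v1.35.0-rc.1 on crio. A hedged one-liner for eyeballing the same data by hand (jq is not part of minikube; this invocation is an assumption, not from this log):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # gcr.io/k8s-minikube/storage-provisioner:v5
    # registry.k8s.io/kube-apiserver:v1.35.0-rc.1
    # ...
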
	I1210 06:28:08.812185  401365 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1210 06:28:08.812284  401365 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-253997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
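
The unit fragment above is the kubelet drop-in minikube generates: ExecStart is cleared and re-set so the pinned v1.35.0-rc.1 binary runs with this node's hostname override and IP. To inspect the effective unit on the node one could run (a sketch; the drop-in path is minikube's usual convention and an assumption here):

    sudo systemctl cat kubelet
    # or, assuming the conventional drop-in location:
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
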
	I1210 06:28:08.812367  401365 ssh_runner.go:195] Run: crio config
	I1210 06:28:08.860605  401365 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1210 06:28:08.860628  401365 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1210 06:28:08.860635  401365 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1210 06:28:08.860638  401365 command_runner.go:130] > #
	I1210 06:28:08.860654  401365 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1210 06:28:08.860661  401365 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1210 06:28:08.860668  401365 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1210 06:28:08.860677  401365 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1210 06:28:08.860680  401365 command_runner.go:130] > # reload'.
	I1210 06:28:08.860687  401365 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1210 06:28:08.860694  401365 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1210 06:28:08.860700  401365 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1210 06:28:08.860706  401365 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1210 06:28:08.860709  401365 command_runner.go:130] > [crio]
	I1210 06:28:08.860716  401365 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1210 06:28:08.860721  401365 command_runner.go:130] > # containers images, in this directory.
	I1210 06:28:08.860730  401365 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1210 06:28:08.860737  401365 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1210 06:28:08.860742  401365 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1210 06:28:08.860760  401365 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1210 06:28:08.860811  401365 command_runner.go:130] > # imagestore = ""
	I1210 06:28:08.860819  401365 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1210 06:28:08.860826  401365 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1210 06:28:08.860837  401365 command_runner.go:130] > # storage_driver = "overlay"
	I1210 06:28:08.860843  401365 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1210 06:28:08.860850  401365 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1210 06:28:08.860853  401365 command_runner.go:130] > # storage_option = [
	I1210 06:28:08.860857  401365 command_runner.go:130] > # ]
	I1210 06:28:08.860864  401365 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1210 06:28:08.860870  401365 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1210 06:28:08.860874  401365 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1210 06:28:08.860880  401365 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1210 06:28:08.860886  401365 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1210 06:28:08.860890  401365 command_runner.go:130] > # always happen on a node reboot
	I1210 06:28:08.860894  401365 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1210 06:28:08.860905  401365 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1210 06:28:08.860911  401365 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1210 06:28:08.860918  401365 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1210 06:28:08.860922  401365 command_runner.go:130] > # version_file_persist = ""
	I1210 06:28:08.860930  401365 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1210 06:28:08.860938  401365 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1210 06:28:08.860941  401365 command_runner.go:130] > # internal_wipe = true
	I1210 06:28:08.860950  401365 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1210 06:28:08.860955  401365 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1210 06:28:08.860959  401365 command_runner.go:130] > # internal_repair = true
	I1210 06:28:08.860964  401365 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1210 06:28:08.860971  401365 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1210 06:28:08.860976  401365 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1210 06:28:08.860981  401365 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1210 06:28:08.860987  401365 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1210 06:28:08.860991  401365 command_runner.go:130] > [crio.api]
	I1210 06:28:08.860997  401365 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1210 06:28:08.861001  401365 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1210 06:28:08.861006  401365 command_runner.go:130] > # IP address on which the stream server will listen.
	I1210 06:28:08.861010  401365 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1210 06:28:08.861017  401365 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1210 06:28:08.861026  401365 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1210 06:28:08.861030  401365 command_runner.go:130] > # stream_port = "0"
	I1210 06:28:08.861035  401365 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1210 06:28:08.861040  401365 command_runner.go:130] > # stream_enable_tls = false
	I1210 06:28:08.861046  401365 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1210 06:28:08.861050  401365 command_runner.go:130] > # stream_idle_timeout = ""
	I1210 06:28:08.861056  401365 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1210 06:28:08.861062  401365 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1210 06:28:08.861066  401365 command_runner.go:130] > # stream_tls_cert = ""
	I1210 06:28:08.861072  401365 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1210 06:28:08.861077  401365 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1210 06:28:08.861081  401365 command_runner.go:130] > # stream_tls_key = ""
	I1210 06:28:08.861087  401365 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1210 06:28:08.861093  401365 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1210 06:28:08.861097  401365 command_runner.go:130] > # automatically pick up the changes.
	I1210 06:28:08.861446  401365 command_runner.go:130] > # stream_tls_ca = ""
	I1210 06:28:08.861478  401365 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 06:28:08.861569  401365 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1210 06:28:08.861581  401365 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 06:28:08.861586  401365 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1210 06:28:08.861593  401365 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1210 06:28:08.861599  401365 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1210 06:28:08.861602  401365 command_runner.go:130] > [crio.runtime]
	I1210 06:28:08.861609  401365 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1210 06:28:08.861614  401365 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1210 06:28:08.861628  401365 command_runner.go:130] > # "nofile=1024:2048"
	I1210 06:28:08.861634  401365 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1210 06:28:08.861638  401365 command_runner.go:130] > # default_ulimits = [
	I1210 06:28:08.861653  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861660  401365 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1210 06:28:08.861663  401365 command_runner.go:130] > # no_pivot = false
	I1210 06:28:08.861669  401365 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1210 06:28:08.861675  401365 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1210 06:28:08.861681  401365 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1210 06:28:08.861687  401365 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1210 06:28:08.861696  401365 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1210 06:28:08.861703  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 06:28:08.861707  401365 command_runner.go:130] > # conmon = ""
	I1210 06:28:08.861711  401365 command_runner.go:130] > # Cgroup setting for conmon
	I1210 06:28:08.861718  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1210 06:28:08.861722  401365 command_runner.go:130] > conmon_cgroup = "pod"
	I1210 06:28:08.861728  401365 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1210 06:28:08.861733  401365 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1210 06:28:08.861740  401365 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 06:28:08.861744  401365 command_runner.go:130] > # conmon_env = [
	I1210 06:28:08.861747  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861753  401365 command_runner.go:130] > # Additional environment variables to set for all the
	I1210 06:28:08.861758  401365 command_runner.go:130] > # containers. These are overridden if set in the
	I1210 06:28:08.861764  401365 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1210 06:28:08.861768  401365 command_runner.go:130] > # default_env = [
	I1210 06:28:08.861771  401365 command_runner.go:130] > # ]
	I1210 06:28:08.861787  401365 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1210 06:28:08.861795  401365 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1210 06:28:08.861799  401365 command_runner.go:130] > # selinux = false
	I1210 06:28:08.861809  401365 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1210 06:28:08.861817  401365 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1210 06:28:08.861823  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862101  401365 command_runner.go:130] > # seccomp_profile = ""
	I1210 06:28:08.862113  401365 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1210 06:28:08.862119  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862201  401365 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1210 06:28:08.862211  401365 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1210 06:28:08.862225  401365 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1210 06:28:08.862232  401365 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1210 06:28:08.862239  401365 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1210 06:28:08.862244  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862248  401365 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1210 06:28:08.862254  401365 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1210 06:28:08.862259  401365 command_runner.go:130] > # the cgroup blockio controller.
	I1210 06:28:08.862263  401365 command_runner.go:130] > # blockio_config_file = ""
	I1210 06:28:08.862273  401365 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1210 06:28:08.862283  401365 command_runner.go:130] > # blockio parameters.
	I1210 06:28:08.862294  401365 command_runner.go:130] > # blockio_reload = false
	I1210 06:28:08.862301  401365 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1210 06:28:08.862304  401365 command_runner.go:130] > # irqbalance daemon.
	I1210 06:28:08.862310  401365 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1210 06:28:08.862316  401365 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1210 06:28:08.862323  401365 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1210 06:28:08.862330  401365 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1210 06:28:08.862336  401365 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1210 06:28:08.862342  401365 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1210 06:28:08.862347  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.862351  401365 command_runner.go:130] > # rdt_config_file = ""
	I1210 06:28:08.862356  401365 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1210 06:28:08.862384  401365 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1210 06:28:08.862391  401365 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1210 06:28:08.862666  401365 command_runner.go:130] > # separate_pull_cgroup = ""
	I1210 06:28:08.862678  401365 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1210 06:28:08.862685  401365 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1210 06:28:08.862689  401365 command_runner.go:130] > # will be added.
	I1210 06:28:08.862693  401365 command_runner.go:130] > # default_capabilities = [
	I1210 06:28:08.862777  401365 command_runner.go:130] > # 	"CHOWN",
	I1210 06:28:08.862786  401365 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1210 06:28:08.862797  401365 command_runner.go:130] > # 	"FSETID",
	I1210 06:28:08.862802  401365 command_runner.go:130] > # 	"FOWNER",
	I1210 06:28:08.862806  401365 command_runner.go:130] > # 	"SETGID",
	I1210 06:28:08.862809  401365 command_runner.go:130] > # 	"SETUID",
	I1210 06:28:08.862838  401365 command_runner.go:130] > # 	"SETPCAP",
	I1210 06:28:08.862844  401365 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1210 06:28:08.862847  401365 command_runner.go:130] > # 	"KILL",
	I1210 06:28:08.862850  401365 command_runner.go:130] > # ]
	I1210 06:28:08.862858  401365 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1210 06:28:08.862865  401365 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1210 06:28:08.863095  401365 command_runner.go:130] > # add_inheritable_capabilities = false
	I1210 06:28:08.863106  401365 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1210 06:28:08.863112  401365 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 06:28:08.863116  401365 command_runner.go:130] > default_sysctls = [
	I1210 06:28:08.863203  401365 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1210 06:28:08.863243  401365 command_runner.go:130] > ]
	I1210 06:28:08.863252  401365 command_runner.go:130] > # List of devices on the host that a
	I1210 06:28:08.863259  401365 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1210 06:28:08.863263  401365 command_runner.go:130] > # allowed_devices = [
	I1210 06:28:08.863314  401365 command_runner.go:130] > # 	"/dev/fuse",
	I1210 06:28:08.863326  401365 command_runner.go:130] > # 	"/dev/net/tun",
	I1210 06:28:08.863333  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863338  401365 command_runner.go:130] > # List of additional devices. specified as
	I1210 06:28:08.863345  401365 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1210 06:28:08.863351  401365 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1210 06:28:08.863357  401365 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 06:28:08.863361  401365 command_runner.go:130] > # additional_devices = [
	I1210 06:28:08.863363  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863368  401365 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1210 06:28:08.863372  401365 command_runner.go:130] > # cdi_spec_dirs = [
	I1210 06:28:08.863376  401365 command_runner.go:130] > # 	"/etc/cdi",
	I1210 06:28:08.863379  401365 command_runner.go:130] > # 	"/var/run/cdi",
	I1210 06:28:08.863382  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863388  401365 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1210 06:28:08.863394  401365 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1210 06:28:08.863398  401365 command_runner.go:130] > # Defaults to false.
	I1210 06:28:08.863403  401365 command_runner.go:130] > # device_ownership_from_security_context = false
	I1210 06:28:08.863410  401365 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1210 06:28:08.863415  401365 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1210 06:28:08.863419  401365 command_runner.go:130] > # hooks_dir = [
	I1210 06:28:08.863604  401365 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1210 06:28:08.863612  401365 command_runner.go:130] > # ]
	I1210 06:28:08.863618  401365 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1210 06:28:08.863625  401365 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1210 06:28:08.863630  401365 command_runner.go:130] > # its default mounts from the following two files:
	I1210 06:28:08.863633  401365 command_runner.go:130] > #
	I1210 06:28:08.863640  401365 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1210 06:28:08.863646  401365 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1210 06:28:08.863652  401365 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1210 06:28:08.863655  401365 command_runner.go:130] > #
	I1210 06:28:08.863661  401365 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1210 06:28:08.863676  401365 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1210 06:28:08.863683  401365 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1210 06:28:08.863687  401365 command_runner.go:130] > #      only add mounts it finds in this file.
	I1210 06:28:08.863690  401365 command_runner.go:130] > #
	I1210 06:28:08.863719  401365 command_runner.go:130] > # default_mounts_file = ""
	I1210 06:28:08.863725  401365 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1210 06:28:08.863732  401365 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1210 06:28:08.863736  401365 command_runner.go:130] > # pids_limit = -1
	I1210 06:28:08.863742  401365 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1210 06:28:08.863748  401365 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1210 06:28:08.863761  401365 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1210 06:28:08.863771  401365 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1210 06:28:08.863775  401365 command_runner.go:130] > # log_size_max = -1
	I1210 06:28:08.863782  401365 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1210 06:28:08.863786  401365 command_runner.go:130] > # log_to_journald = false
	I1210 06:28:08.863792  401365 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1210 06:28:08.863974  401365 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1210 06:28:08.863984  401365 command_runner.go:130] > # Path to directory for container attach sockets.
	I1210 06:28:08.863990  401365 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1210 06:28:08.863996  401365 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1210 06:28:08.864082  401365 command_runner.go:130] > # bind_mount_prefix = ""
	I1210 06:28:08.864098  401365 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1210 06:28:08.864139  401365 command_runner.go:130] > # read_only = false
	I1210 06:28:08.864149  401365 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1210 06:28:08.864156  401365 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1210 06:28:08.864159  401365 command_runner.go:130] > # live configuration reload.
	I1210 06:28:08.864163  401365 command_runner.go:130] > # log_level = "info"
	I1210 06:28:08.864169  401365 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1210 06:28:08.864174  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.864178  401365 command_runner.go:130] > # log_filter = ""
	I1210 06:28:08.864183  401365 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1210 06:28:08.864190  401365 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1210 06:28:08.864193  401365 command_runner.go:130] > # separated by comma.
	I1210 06:28:08.864208  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864211  401365 command_runner.go:130] > # uid_mappings = ""
	I1210 06:28:08.864218  401365 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1210 06:28:08.864224  401365 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1210 06:28:08.864228  401365 command_runner.go:130] > # separated by comma.
	I1210 06:28:08.864236  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864440  401365 command_runner.go:130] > # gid_mappings = ""
	I1210 06:28:08.864451  401365 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1210 06:28:08.864458  401365 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 06:28:08.864465  401365 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 06:28:08.864473  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864477  401365 command_runner.go:130] > # minimum_mappable_uid = -1
	I1210 06:28:08.864483  401365 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1210 06:28:08.864493  401365 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 06:28:08.864501  401365 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 06:28:08.864514  401365 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 06:28:08.864541  401365 command_runner.go:130] > # minimum_mappable_gid = -1
	I1210 06:28:08.864548  401365 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1210 06:28:08.864555  401365 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1210 06:28:08.864560  401365 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1210 06:28:08.864572  401365 command_runner.go:130] > # ctr_stop_timeout = 30
	I1210 06:28:08.864578  401365 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1210 06:28:08.864588  401365 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1210 06:28:08.864593  401365 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1210 06:28:08.864598  401365 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1210 06:28:08.864602  401365 command_runner.go:130] > # drop_infra_ctr = true
	I1210 06:28:08.864608  401365 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1210 06:28:08.864613  401365 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1210 06:28:08.864621  401365 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1210 06:28:08.864625  401365 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1210 06:28:08.864632  401365 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1210 06:28:08.864638  401365 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1210 06:28:08.864644  401365 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1210 06:28:08.864649  401365 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1210 06:28:08.864653  401365 command_runner.go:130] > # shared_cpuset = ""
	I1210 06:28:08.864659  401365 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1210 06:28:08.864664  401365 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1210 06:28:08.864668  401365 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1210 06:28:08.864675  401365 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1210 06:28:08.864858  401365 command_runner.go:130] > # pinns_path = ""
	I1210 06:28:08.864869  401365 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1210 06:28:08.864876  401365 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1210 06:28:08.864881  401365 command_runner.go:130] > # enable_criu_support = true
	I1210 06:28:08.864886  401365 command_runner.go:130] > # Enable/disable the generation of the container and
	I1210 06:28:08.864892  401365 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG.
	I1210 06:28:08.864935  401365 command_runner.go:130] > # enable_pod_events = false
	I1210 06:28:08.864946  401365 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 06:28:08.864960  401365 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1210 06:28:08.865092  401365 command_runner.go:130] > # default_runtime = "crun"
	I1210 06:28:08.865104  401365 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1210 06:28:08.865112  401365 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the source as a directory).
	I1210 06:28:08.865122  401365 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1210 06:28:08.865127  401365 command_runner.go:130] > # creation as a file is not desired either.
	I1210 06:28:08.865136  401365 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1210 06:28:08.865141  401365 command_runner.go:130] > # the hostname is being managed dynamically.
	I1210 06:28:08.865146  401365 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1210 06:28:08.865148  401365 command_runner.go:130] > # ]
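	As an illustration of the option above, a drop-in could reject the /etc/hostname case from the comment; this is a minimal sketch, not part of this run's configuration:
	
	  [crio.runtime]
	  # Fail container creation if /etc/hostname is absent on the host,
	  # instead of silently creating it as a directory.
	  absent_mount_sources_to_reject = [
	  	"/etc/hostname",
	  ]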
	I1210 06:28:08.865158  401365 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1210 06:28:08.865165  401365 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1210 06:28:08.865171  401365 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1210 06:28:08.865177  401365 command_runner.go:130] > # Each entry in the table should follow the format:
	I1210 06:28:08.865179  401365 command_runner.go:130] > #
	I1210 06:28:08.865200  401365 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1210 06:28:08.865207  401365 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1210 06:28:08.865210  401365 command_runner.go:130] > # runtime_type = "oci"
	I1210 06:28:08.865215  401365 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1210 06:28:08.865219  401365 command_runner.go:130] > # inherit_default_runtime = false
	I1210 06:28:08.865224  401365 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1210 06:28:08.865229  401365 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1210 06:28:08.865233  401365 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1210 06:28:08.865236  401365 command_runner.go:130] > # monitor_env = []
	I1210 06:28:08.865241  401365 command_runner.go:130] > # privileged_without_host_devices = false
	I1210 06:28:08.865245  401365 command_runner.go:130] > # allowed_annotations = []
	I1210 06:28:08.865250  401365 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1210 06:28:08.865253  401365 command_runner.go:130] > # no_sync_log = false
	I1210 06:28:08.865257  401365 command_runner.go:130] > # default_annotations = {}
	I1210 06:28:08.865261  401365 command_runner.go:130] > # stream_websockets = false
	I1210 06:28:08.865265  401365 command_runner.go:130] > # seccomp_profile = ""
	I1210 06:28:08.865296  401365 command_runner.go:130] > # Where:
	I1210 06:28:08.865301  401365 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1210 06:28:08.865308  401365 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1210 06:28:08.865314  401365 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1210 06:28:08.865320  401365 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1210 06:28:08.865323  401365 command_runner.go:130] > #   in $PATH.
	I1210 06:28:08.865330  401365 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1210 06:28:08.865334  401365 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1210 06:28:08.865341  401365 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1210 06:28:08.865344  401365 command_runner.go:130] > #   state.
	I1210 06:28:08.865352  401365 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1210 06:28:08.865360  401365 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1210 06:28:08.865368  401365 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1210 06:28:08.865376  401365 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1210 06:28:08.865381  401365 command_runner.go:130] > #   the values from the default runtime on load time.
	I1210 06:28:08.865387  401365 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1210 06:28:08.865392  401365 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1210 06:28:08.865399  401365 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1210 06:28:08.865406  401365 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1210 06:28:08.865411  401365 command_runner.go:130] > #   The currently recognized values are:
	I1210 06:28:08.865417  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1210 06:28:08.865425  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1210 06:28:08.865431  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1210 06:28:08.865437  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1210 06:28:08.865444  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1210 06:28:08.865451  401365 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1210 06:28:08.865458  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1210 06:28:08.865464  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1210 06:28:08.865470  401365 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1210 06:28:08.865492  401365 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1210 06:28:08.865501  401365 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1210 06:28:08.865507  401365 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1210 06:28:08.865513  401365 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1210 06:28:08.865519  401365 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1210 06:28:08.865525  401365 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1210 06:28:08.865533  401365 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1210 06:28:08.865539  401365 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1210 06:28:08.865552  401365 command_runner.go:130] > #   deprecated option "conmon".
	I1210 06:28:08.865560  401365 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1210 06:28:08.865565  401365 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1210 06:28:08.865572  401365 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1210 06:28:08.865578  401365 command_runner.go:130] > #   should be moved to the container's cgroup
	I1210 06:28:08.865587  401365 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1210 06:28:08.865592  401365 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1210 06:28:08.865599  401365 command_runner.go:130] > #   When using the pod runtime and conmon-rs, monitor_env can be used to further configure
	I1210 06:28:08.865607  401365 command_runner.go:130] > #   conmon-rs by using:
	I1210 06:28:08.865615  401365 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1210 06:28:08.865622  401365 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1210 06:28:08.865630  401365 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1210 06:28:08.865636  401365 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1210 06:28:08.865642  401365 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1210 06:28:08.865649  401365 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1210 06:28:08.865657  401365 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1210 06:28:08.865661  401365 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1210 06:28:08.865669  401365 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1210 06:28:08.865677  401365 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1210 06:28:08.865685  401365 command_runner.go:130] > #   when a machine crash happens.
	I1210 06:28:08.865693  401365 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1210 06:28:08.865700  401365 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1210 06:28:08.865708  401365 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1210 06:28:08.865713  401365 command_runner.go:130] > #   seccomp profile for the runtime.
	I1210 06:28:08.865719  401365 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1210 06:28:08.865744  401365 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1210 06:28:08.865747  401365 command_runner.go:130] > #
	I1210 06:28:08.865751  401365 command_runner.go:130] > # Using the seccomp notifier feature:
	I1210 06:28:08.865754  401365 command_runner.go:130] > #
	I1210 06:28:08.865762  401365 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1210 06:28:08.865768  401365 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1210 06:28:08.865771  401365 command_runner.go:130] > #
	I1210 06:28:08.865777  401365 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1210 06:28:08.865783  401365 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1210 06:28:08.865785  401365 command_runner.go:130] > #
	I1210 06:28:08.865793  401365 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1210 06:28:08.865797  401365 command_runner.go:130] > # feature.
	I1210 06:28:08.865800  401365 command_runner.go:130] > #
	I1210 06:28:08.865807  401365 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1210 06:28:08.865813  401365 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1210 06:28:08.865819  401365 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1210 06:28:08.865832  401365 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1210 06:28:08.865838  401365 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1210 06:28:08.865841  401365 command_runner.go:130] > #
	I1210 06:28:08.865847  401365 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1210 06:28:08.865853  401365 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1210 06:28:08.865856  401365 command_runner.go:130] > #
	I1210 06:28:08.865862  401365 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1210 06:28:08.865870  401365 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1210 06:28:08.865873  401365 command_runner.go:130] > #
	I1210 06:28:08.865880  401365 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1210 06:28:08.865885  401365 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1210 06:28:08.865889  401365 command_runner.go:130] > # limitation.
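	A minimal sketch of the notifier wiring described above (the handler name "runc-notify" and its reuse of the runc binary are assumptions, not values from this host):
	
	  [crio.runtime.runtimes.runc-notify]
	  runtime_path = "/usr/libexec/crio/runc"
	  # Allow pods scheduled onto this handler to request the notifier.
	  allowed_annotations = [
	  	"io.kubernetes.cri-o.seccompNotifierAction",
	  ]
	
	A pod would then opt in by setting the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and restartPolicy "Never", as the comments above require.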
	I1210 06:28:08.865905  401365 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1210 06:28:08.866331  401365 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1210 06:28:08.866426  401365 command_runner.go:130] > runtime_type = ""
	I1210 06:28:08.866446  401365 command_runner.go:130] > runtime_root = "/run/crun"
	I1210 06:28:08.866464  401365 command_runner.go:130] > inherit_default_runtime = false
	I1210 06:28:08.866497  401365 command_runner.go:130] > runtime_config_path = ""
	I1210 06:28:08.866524  401365 command_runner.go:130] > container_min_memory = ""
	I1210 06:28:08.866577  401365 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 06:28:08.866606  401365 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 06:28:08.866632  401365 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 06:28:08.866654  401365 command_runner.go:130] > allowed_annotations = [
	I1210 06:28:08.866675  401365 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1210 06:28:08.866694  401365 command_runner.go:130] > ]
	I1210 06:28:08.866728  401365 command_runner.go:130] > privileged_without_host_devices = false
	I1210 06:28:08.866748  401365 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1210 06:28:08.866769  401365 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1210 06:28:08.866790  401365 command_runner.go:130] > runtime_type = ""
	I1210 06:28:08.866821  401365 command_runner.go:130] > runtime_root = "/run/runc"
	I1210 06:28:08.866840  401365 command_runner.go:130] > inherit_default_runtime = false
	I1210 06:28:08.866860  401365 command_runner.go:130] > runtime_config_path = ""
	I1210 06:28:08.866880  401365 command_runner.go:130] > container_min_memory = ""
	I1210 06:28:08.866908  401365 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 06:28:08.866932  401365 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 06:28:08.866953  401365 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 06:28:08.866974  401365 command_runner.go:130] > privileged_without_host_devices = false
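	Registering a further handler follows the same table format as the crun and runc entries above; the following sketch uses entirely hypothetical values for a VM-type runtime:
	
	  [crio.runtime.runtimes.kata]
	  runtime_path = "/usr/local/bin/kata-runtime"   # assumed install location
	  runtime_type = "vm"
	  # runtime_config_path is only honored for the "vm" runtime_type.
	  runtime_config_path = "/etc/kata/configuration.toml"
	  privileged_without_host_devices = true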
	I1210 06:28:08.867007  401365 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1210 06:28:08.867043  401365 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1210 06:28:08.867068  401365 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1210 06:28:08.867104  401365 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1210 06:28:08.867134  401365 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1210 06:28:08.867162  401365 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; it is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1210 06:28:08.867185  401365 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1210 06:28:08.867213  401365 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1210 06:28:08.867246  401365 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1210 06:28:08.867272  401365 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1210 06:28:08.867293  401365 command_runner.go:130] > # signifying that the default value for that resource type should be overridden.
	I1210 06:28:08.867324  401365 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1210 06:28:08.867347  401365 command_runner.go:130] > # Example:
	I1210 06:28:08.867368  401365 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1210 06:28:08.867390  401365 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1210 06:28:08.867422  401365 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1210 06:28:08.867444  401365 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1210 06:28:08.867461  401365 command_runner.go:130] > # cpuset = "0-1"
	I1210 06:28:08.867481  401365 command_runner.go:130] > # cpushares = "5"
	I1210 06:28:08.867501  401365 command_runner.go:130] > # cpuquota = "1000"
	I1210 06:28:08.867527  401365 command_runner.go:130] > # cpuperiod = "100000"
	I1210 06:28:08.867550  401365 command_runner.go:130] > # cpulimit = "35"
	I1210 06:28:08.867570  401365 command_runner.go:130] > # Where:
	I1210 06:28:08.867591  401365 command_runner.go:130] > # The workload name is workload-type.
	I1210 06:28:08.867625  401365 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1210 06:28:08.867647  401365 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1210 06:28:08.867667  401365 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1210 06:28:08.867691  401365 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1210 06:28:08.867724  401365 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
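	To make the cpulimit arithmetic above concrete (values are illustrative only): with the example cpuperiod of 100000 microseconds, a cpulimit of 500 millicores is half a CPU, so the derived cpuquota would be 100000 * 500 / 1000 = 50000 microseconds, overriding any cpuquota set here:
	
	  [crio.runtime.workloads.workload-type.resources]
	  cpuperiod = "100000"
	  cpulimit = "500"   # 0.5 CPU -> derived cpuquota of 50000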
	I1210 06:28:08.867747  401365 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1210 06:28:08.867767  401365 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1210 06:28:08.867786  401365 command_runner.go:130] > # Default value is set to true
	I1210 06:28:08.867808  401365 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1210 06:28:08.867842  401365 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1210 06:28:08.867862  401365 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1210 06:28:08.867882  401365 command_runner.go:130] > # Default value is set to 'false'
	I1210 06:28:08.867915  401365 command_runner.go:130] > # disable_hostport_mapping = false
	I1210 06:28:08.867942  401365 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1210 06:28:08.867964  401365 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1210 06:28:08.867982  401365 command_runner.go:130] > # timezone = ""
	I1210 06:28:08.868015  401365 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1210 06:28:08.868041  401365 command_runner.go:130] > #
	I1210 06:28:08.868060  401365 command_runner.go:130] > # CRI-O reads its configured registry defaults from the system-wide
	I1210 06:28:08.868081  401365 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1210 06:28:08.868110  401365 command_runner.go:130] > [crio.image]
	I1210 06:28:08.868133  401365 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1210 06:28:08.868150  401365 command_runner.go:130] > # default_transport = "docker://"
	I1210 06:28:08.868170  401365 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1210 06:28:08.868192  401365 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1210 06:28:08.868219  401365 command_runner.go:130] > # global_auth_file = ""
	I1210 06:28:08.868243  401365 command_runner.go:130] > # The image used to instantiate infra containers.
	I1210 06:28:08.868264  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.868284  401365 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1210 06:28:08.868317  401365 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1210 06:28:08.868338  401365 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1210 06:28:08.868357  401365 command_runner.go:130] > # This option supports live configuration reload.
	I1210 06:28:08.868374  401365 command_runner.go:130] > # pause_image_auth_file = ""
	I1210 06:28:08.868396  401365 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1210 06:28:08.868423  401365 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1210 06:28:08.868450  401365 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1210 06:28:08.868474  401365 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1210 06:28:08.868753  401365 command_runner.go:130] > # pause_command = "/pause"
	I1210 06:28:08.868765  401365 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1210 06:28:08.868772  401365 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1210 06:28:08.868778  401365 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1210 06:28:08.868784  401365 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1210 06:28:08.868791  401365 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1210 06:28:08.868797  401365 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1210 06:28:08.868802  401365 command_runner.go:130] > # pinned_images = [
	I1210 06:28:08.868834  401365 command_runner.go:130] > # ]
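	A sketch of the three pattern styles the comment describes (image names are placeholders, apart from the pause image quoted earlier in this config):
	
	  [crio.image]
	  pinned_images = [
	  	"registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
	  	"registry.k8s.io/etcd*",          # glob: wildcard at the end only
	  	"*coredns*",                      # keyword: wildcards on both ends
	  ]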
	I1210 06:28:08.868841  401365 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1210 06:28:08.868848  401365 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1210 06:28:08.868855  401365 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1210 06:28:08.868864  401365 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1210 06:28:08.868877  401365 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1210 06:28:08.868892  401365 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1210 06:28:08.868897  401365 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1210 06:28:08.868904  401365 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1210 06:28:08.868911  401365 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1210 06:28:08.868917  401365 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1210 06:28:08.868924  401365 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1210 06:28:08.868928  401365 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1210 06:28:08.868935  401365 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1210 06:28:08.868941  401365 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1210 06:28:08.868945  401365 command_runner.go:130] > # changing them here.
	I1210 06:28:08.868950  401365 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1210 06:28:08.868954  401365 command_runner.go:130] > # insecure_registries = [
	I1210 06:28:08.868957  401365 command_runner.go:130] > # ]
	I1210 06:28:08.868964  401365 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1210 06:28:08.868968  401365 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1210 06:28:08.868972  401365 command_runner.go:130] > # image_volumes = "mkdir"
	I1210 06:28:08.868978  401365 command_runner.go:130] > # Temporary directory to use for storing big files
	I1210 06:28:08.868982  401365 command_runner.go:130] > # big_files_temporary_dir = ""
	I1210 06:28:08.868988  401365 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1210 06:28:08.868995  401365 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1210 06:28:08.868999  401365 command_runner.go:130] > # auto_reload_registries = false
	I1210 06:28:08.869006  401365 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1210 06:28:08.869014  401365 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1210 06:28:08.869022  401365 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1210 06:28:08.869027  401365 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1210 06:28:08.869031  401365 command_runner.go:130] > # The mode of short name resolution.
	I1210 06:28:08.869039  401365 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1210 06:28:08.869047  401365 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1210 06:28:08.869051  401365 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1210 06:28:08.869055  401365 command_runner.go:130] > # short_name_mode = "enforcing"
	I1210 06:28:08.869061  401365 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1210 06:28:08.869067  401365 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1210 06:28:08.869299  401365 command_runner.go:130] > # oci_artifact_mount_support = true
	I1210 06:28:08.869316  401365 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1210 06:28:08.869329  401365 command_runner.go:130] > # CNI plugins.
	I1210 06:28:08.869333  401365 command_runner.go:130] > [crio.network]
	I1210 06:28:08.869340  401365 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1210 06:28:08.869346  401365 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1210 06:28:08.869485  401365 command_runner.go:130] > # cni_default_network = ""
	I1210 06:28:08.869502  401365 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1210 06:28:08.869709  401365 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1210 06:28:08.869721  401365 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1210 06:28:08.869725  401365 command_runner.go:130] > # plugin_dirs = [
	I1210 06:28:08.869729  401365 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1210 06:28:08.869732  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869736  401365 command_runner.go:130] > # List of included pod metrics.
	I1210 06:28:08.869740  401365 command_runner.go:130] > # included_pod_metrics = [
	I1210 06:28:08.869743  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869749  401365 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1210 06:28:08.869752  401365 command_runner.go:130] > [crio.metrics]
	I1210 06:28:08.869757  401365 command_runner.go:130] > # Globally enable or disable metrics support.
	I1210 06:28:08.869763  401365 command_runner.go:130] > # enable_metrics = false
	I1210 06:28:08.869767  401365 command_runner.go:130] > # Specify enabled metrics collectors.
	I1210 06:28:08.869772  401365 command_runner.go:130] > # Per default all metrics are enabled.
	I1210 06:28:08.869778  401365 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1210 06:28:08.869785  401365 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1210 06:28:08.869791  401365 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1210 06:28:08.869796  401365 command_runner.go:130] > # metrics_collectors = [
	I1210 06:28:08.869800  401365 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1210 06:28:08.869805  401365 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1210 06:28:08.869809  401365 command_runner.go:130] > # 	"containers_oom_total",
	I1210 06:28:08.869813  401365 command_runner.go:130] > # 	"processes_defunct",
	I1210 06:28:08.869817  401365 command_runner.go:130] > # 	"operations_total",
	I1210 06:28:08.869821  401365 command_runner.go:130] > # 	"operations_latency_seconds",
	I1210 06:28:08.869826  401365 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1210 06:28:08.869830  401365 command_runner.go:130] > # 	"operations_errors_total",
	I1210 06:28:08.869834  401365 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1210 06:28:08.869839  401365 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1210 06:28:08.869843  401365 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1210 06:28:08.869851  401365 command_runner.go:130] > # 	"image_pulls_success_total",
	I1210 06:28:08.869855  401365 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1210 06:28:08.869860  401365 command_runner.go:130] > # 	"containers_oom_count_total",
	I1210 06:28:08.869865  401365 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1210 06:28:08.869873  401365 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1210 06:28:08.869878  401365 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1210 06:28:08.869881  401365 command_runner.go:130] > # ]
	I1210 06:28:08.869887  401365 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1210 06:28:08.869891  401365 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1210 06:28:08.869896  401365 command_runner.go:130] > # The port on which the metrics server will listen.
	I1210 06:28:08.869901  401365 command_runner.go:130] > # metrics_port = 9090
	I1210 06:28:08.869906  401365 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1210 06:28:08.869910  401365 command_runner.go:130] > # metrics_socket = ""
	I1210 06:28:08.869915  401365 command_runner.go:130] > # The certificate for the secure metrics server.
	I1210 06:28:08.869921  401365 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1210 06:28:08.869928  401365 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1210 06:28:08.869934  401365 command_runner.go:130] > # certificate on any modification event.
	I1210 06:28:08.869938  401365 command_runner.go:130] > # metrics_cert = ""
	I1210 06:28:08.869943  401365 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1210 06:28:08.869948  401365 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1210 06:28:08.869963  401365 command_runner.go:130] > # metrics_key = ""
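	For reference, a hypothetical drop-in enabling the metrics server with a subset of the collectors listed above (host and port simply repeat the defaults from the comments):
	
	  [crio.metrics]
	  enable_metrics = true
	  metrics_host = "127.0.0.1"
	  metrics_port = 9090
	  metrics_collectors = [
	  	"operations_total",
	  	"image_pulls_failure_total",
	  ]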
	I1210 06:28:08.869970  401365 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1210 06:28:08.869973  401365 command_runner.go:130] > [crio.tracing]
	I1210 06:28:08.869978  401365 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1210 06:28:08.869982  401365 command_runner.go:130] > # enable_tracing = false
	I1210 06:28:08.869987  401365 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1210 06:28:08.869992  401365 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1210 06:28:08.869999  401365 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1210 06:28:08.870003  401365 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
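	Similarly, a hypothetical tracing drop-in using the endpoint default quoted above, with always-on sampling per the comment:
	
	  [crio.tracing]
	  enable_tracing = true
	  tracing_endpoint = "127.0.0.1:4317"
	  tracing_sampling_rate_per_million = 1000000   # always sample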
	I1210 06:28:08.870007  401365 command_runner.go:130] > # CRI-O NRI configuration.
	I1210 06:28:08.870010  401365 command_runner.go:130] > [crio.nri]
	I1210 06:28:08.870014  401365 command_runner.go:130] > # Globally enable or disable NRI.
	I1210 06:28:08.870018  401365 command_runner.go:130] > # enable_nri = true
	I1210 06:28:08.870022  401365 command_runner.go:130] > # NRI socket to listen on.
	I1210 06:28:08.870026  401365 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1210 06:28:08.870031  401365 command_runner.go:130] > # NRI plugin directory to use.
	I1210 06:28:08.870035  401365 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1210 06:28:08.870044  401365 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1210 06:28:08.870049  401365 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1210 06:28:08.870054  401365 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1210 06:28:08.870120  401365 command_runner.go:130] > # nri_disable_connections = false
	I1210 06:28:08.870126  401365 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1210 06:28:08.870131  401365 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1210 06:28:08.870136  401365 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1210 06:28:08.870140  401365 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1210 06:28:08.870144  401365 command_runner.go:130] > # NRI default validator configuration.
	I1210 06:28:08.870151  401365 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1210 06:28:08.870158  401365 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1210 06:28:08.870166  401365 command_runner.go:130] > # can be restricted/rejected:
	I1210 06:28:08.870170  401365 command_runner.go:130] > # - OCI hook injection
	I1210 06:28:08.870176  401365 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1210 06:28:08.870182  401365 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1210 06:28:08.870187  401365 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1210 06:28:08.870192  401365 command_runner.go:130] > # - adjustment of linux namespaces
	I1210 06:28:08.870198  401365 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1210 06:28:08.870204  401365 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1210 06:28:08.870211  401365 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1210 06:28:08.870214  401365 command_runner.go:130] > #
	I1210 06:28:08.870219  401365 command_runner.go:130] > # [crio.nri.default_validator]
	I1210 06:28:08.870224  401365 command_runner.go:130] > # nri_enable_default_validator = false
	I1210 06:28:08.870229  401365 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1210 06:28:08.870235  401365 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1210 06:28:08.870240  401365 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1210 06:28:08.870245  401365 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1210 06:28:08.870249  401365 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1210 06:28:08.870254  401365 command_runner.go:130] > # nri_validator_required_plugins = [
	I1210 06:28:08.870256  401365 command_runner.go:130] > # ]
	I1210 06:28:08.870261  401365 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
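	Putting the validator options together, a sketch that enables the builtin validator, rejects OCI hook injection, and requires one (hypothetical) plugin:
	
	  [crio.nri.default_validator]
	  nri_enable_default_validator = true
	  nri_validator_reject_oci_hook_adjustment = true
	  nri_validator_required_plugins = [
	  	"my-resource-plugin",   # hypothetical plugin that must process each container
	  ]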
	I1210 06:28:08.870267  401365 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1210 06:28:08.870270  401365 command_runner.go:130] > [crio.stats]
	I1210 06:28:08.870279  401365 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1210 06:28:08.870285  401365 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1210 06:28:08.870289  401365 command_runner.go:130] > # stats_collection_period = 0
	I1210 06:28:08.870295  401365 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1210 06:28:08.870301  401365 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1210 06:28:08.870309  401365 command_runner.go:130] > # collection_period = 0
	I1210 06:28:08.872234  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838776003Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1210 06:28:08.872284  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838812886Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1210 06:28:08.872309  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.838840094Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1210 06:28:08.872334  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839193559Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1210 06:28:08.872381  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839375723Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:28:08.872413  401365 command_runner.go:130] ! time="2025-12-10T06:28:08.839707715Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1210 06:28:08.872441  401365 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1210 06:28:08.872553  401365 cni.go:84] Creating CNI manager for ""
	I1210 06:28:08.872583  401365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:28:08.872624  401365 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:28:08.872677  401365 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-253997 NodeName:functional-253997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:28:08.872842  401365 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-253997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:28:08.872963  401365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:28:08.882589  401365 command_runner.go:130] > kubeadm
	I1210 06:28:08.882664  401365 command_runner.go:130] > kubectl
	I1210 06:28:08.882683  401365 command_runner.go:130] > kubelet
	I1210 06:28:08.883772  401365 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:28:08.883860  401365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:28:08.894311  401365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:28:08.917477  401365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:28:08.933123  401365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1210 06:28:08.951215  401365 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:28:08.955022  401365 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 06:28:08.955137  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:09.068336  401365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:28:09.626369  401365 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997 for IP: 192.168.49.2
	I1210 06:28:09.626393  401365 certs.go:195] generating shared ca certs ...
	I1210 06:28:09.626411  401365 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:09.626560  401365 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 06:28:09.626610  401365 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 06:28:09.626622  401365 certs.go:257] generating profile certs ...
	I1210 06:28:09.626723  401365 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key
	I1210 06:28:09.626797  401365 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423
	I1210 06:28:09.626842  401365 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key
	I1210 06:28:09.626855  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 06:28:09.626868  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 06:28:09.626879  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 06:28:09.626895  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 06:28:09.626917  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 06:28:09.626934  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 06:28:09.626951  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 06:28:09.626967  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 06:28:09.627018  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 06:28:09.627054  401365 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 06:28:09.627067  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:28:09.627098  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:28:09.627129  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:28:09.627160  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 06:28:09.627208  401365 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:28:09.627243  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.627257  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem -> /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.627269  401365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> /usr/share/ca-certificates/3642652.pem
	I1210 06:28:09.627907  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:28:09.646839  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:28:09.665451  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:28:09.684144  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:28:09.703168  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:28:09.722766  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:28:09.740755  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:28:09.758979  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:28:09.777915  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:28:09.796193  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 06:28:09.814097  401365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 06:28:09.831978  401365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:28:09.845391  401365 ssh_runner.go:195] Run: openssl version
	I1210 06:28:09.851779  401365 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 06:28:09.852274  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.860146  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:28:09.868064  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872198  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872310  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.872381  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:28:09.915298  401365 command_runner.go:130] > b5213941
	I1210 06:28:09.915776  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:28:09.923881  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.931564  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 06:28:09.939347  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943515  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943602  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.943706  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 06:28:09.984596  401365 command_runner.go:130] > 51391683
	I1210 06:28:09.985095  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:28:09.992884  401365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.000682  401365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 06:28:10.009973  401365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015475  401365 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015546  401365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.015611  401365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 06:28:10.058412  401365 command_runner.go:130] > 3ec20f2e
	I1210 06:28:10.059028  401365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:28:10.067481  401365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:28:10.072097  401365 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:28:10.072141  401365 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 06:28:10.072148  401365 command_runner.go:130] > Device: 259,1	Inode: 3906312     Links: 1
	I1210 06:28:10.072155  401365 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 06:28:10.072162  401365 command_runner.go:130] > Access: 2025-12-10 06:24:00.744386425 +0000
	I1210 06:28:10.072185  401365 command_runner.go:130] > Modify: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072211  401365 command_runner.go:130] > Change: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072217  401365 command_runner.go:130] >  Birth: 2025-12-10 06:19:55.737291822 +0000
	I1210 06:28:10.072295  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:28:10.114065  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.114701  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:28:10.156441  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.157041  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:28:10.198547  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.198997  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:28:10.239473  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.239921  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:28:10.280741  401365 command_runner.go:130] > Certificate will not expire
	I1210 06:28:10.281284  401365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:28:10.322073  401365 command_runner.go:130] > Certificate will not expire
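	[editor's note] The "-checkend 86400" probes above ask openssl whether each control-plane certificate is still valid 86400 seconds (24 hours) from now; exit status 0 prints "Certificate will not expire". A hedged Go sketch of that probe follows. For brevity it treats any non-zero exit (including a missing file) as "expiring", which the real code would distinguish.

// certcheck_sketch.go -- sketch of the expiry probe in the log.
package main

import (
	"fmt"
	"os/exec"
)

func expiresWithinADay(crt string) bool {
	// openssl exits 0 when the cert is still valid 86400s from now,
	// non-zero when it will expire (surfaced here as an *exec.ExitError).
	cmd := exec.Command("openssl", "x509", "-noout", "-in", crt, "-checkend", "86400")
	return cmd.Run() != nil
}

func main() {
	for _, crt := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Printf("%s expires within 24h: %v\n", crt, expiresWithinADay(crt))
	}
}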
	I1210 06:28:10.322510  401365 kubeadm.go:401] StartCluster: {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:28:10.322592  401365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:28:10.322670  401365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:28:10.349813  401365 cri.go:89] found id: ""
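	[editor's note] The cri.go step above lists kube-system containers through crictl; the empty result (`found id: ""`) means no control-plane containers are running yet. A small Go sketch of that listing follows, mirroring the exact crictl invocation from the log; it is a simplification, not minikube's cri package.

// crilist_sketch.go -- sketch of the kube-system container listing step.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out)) // one container ID per line; empty slice if none
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}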
	I1210 06:28:10.349915  401365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:28:10.357053  401365 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 06:28:10.357076  401365 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 06:28:10.357083  401365 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 06:28:10.358087  401365 kubeadm.go:417] found existing configuration files, will attempt cluster restart
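	[editor's note] The restart decision above hinges on leftover state from a previous run: when /var/lib/kubelet/kubeadm-flags.env, /var/lib/kubelet/config.yaml and /var/lib/minikube/etcd all still exist, minikube attempts a cluster restart instead of a fresh init. The Go sketch below mirrors only that observable check; the real logic in kubeadm.go is more nuanced.

// restartcheck_sketch.go -- sketch of the restart-vs-init decision.
package main

import (
	"fmt"
	"os"
)

func main() {
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	existing := 0
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			existing++
		}
	}
	if existing == len(paths) {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no prior state, initializing a new cluster")
	}
}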
	I1210 06:28:10.358107  401365 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:28:10.358179  401365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:28:10.366355  401365 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:28:10.366773  401365 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-253997" does not appear in /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.366892  401365 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-362392/kubeconfig needs updating (will repair): [kubeconfig missing "functional-253997" cluster setting kubeconfig missing "functional-253997" context setting]
	I1210 06:28:10.367176  401365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.367620  401365 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.367775  401365 kapi.go:59] client config for functional-253997: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:28:10.368328  401365 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:28:10.368348  401365 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:28:10.368357  401365 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:28:10.368361  401365 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:28:10.368366  401365 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:28:10.368683  401365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:28:10.368778  401365 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 06:28:10.376809  401365 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 06:28:10.376842  401365 kubeadm.go:602] duration metric: took 18.728652ms to restartPrimaryControlPlane
	I1210 06:28:10.376852  401365 kubeadm.go:403] duration metric: took 54.348915ms to StartCluster
	I1210 06:28:10.376867  401365 settings.go:142] acquiring lock: {Name:mk50f3096e55008942432e93c6cf4976a70e957e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.376930  401365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.377580  401365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:28:10.377783  401365 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:28:10.378131  401365 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:28:10.378203  401365 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:28:10.378273  401365 addons.go:70] Setting storage-provisioner=true in profile "functional-253997"
	I1210 06:28:10.378288  401365 addons.go:239] Setting addon storage-provisioner=true in "functional-253997"
	I1210 06:28:10.378298  401365 addons.go:70] Setting default-storageclass=true in profile "functional-253997"
	I1210 06:28:10.378308  401365 host.go:66] Checking if "functional-253997" exists ...
	I1210 06:28:10.378325  401365 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-253997"
	I1210 06:28:10.378609  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.378772  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.382148  401365 out.go:179] * Verifying Kubernetes components...
	I1210 06:28:10.385829  401365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:28:10.411769  401365 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:28:10.411927  401365 kapi.go:59] client config for functional-253997: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:28:10.412189  401365 addons.go:239] Setting addon default-storageclass=true in "functional-253997"
	I1210 06:28:10.412217  401365 host.go:66] Checking if "functional-253997" exists ...
	I1210 06:28:10.412622  401365 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:28:10.423310  401365 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:28:10.429289  401365 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:10.429319  401365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:28:10.429390  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:10.437508  401365 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:10.437529  401365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:28:10.437602  401365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:28:10.484090  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:28:10.489523  401365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
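	[editor's note] The cli_runner lines above resolve which host port Docker mapped to the guest's 22/tcp (33159 in this run) so that the sshutil clients can dial 127.0.0.1 instead of the container network. The Go sketch below reuses the exact inspect template from the log; it is a standalone illustration, not minikube's cli_runner.

// sshport_sketch.go -- sketch of the SSH host-port lookup seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("functional-253997")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + port) // e.g. 33159 in this run
}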
	I1210 06:28:10.601993  401365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:28:10.611397  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:10.637290  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.377346  401365 node_ready.go:35] waiting up to 6m0s for node "functional-253997" to be "Ready" ...
	I1210 06:28:11.377544  401365 type.go:168] "Request Body" body=""
	I1210 06:28:11.377656  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.377728  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1210 06:28:11.377850  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.377894  401365 retry.go:31] will retry after 259.470683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.378104  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.378200  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.378242  401365 retry.go:31] will retry after 196.4073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
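	[editor's note] The retry.go:31 lines above, and the many that follow, all show the same pattern: the kubectl apply fails while the apiserver is still down, and minikube reschedules it after a growing, uneven delay (259.47ms, 196.4ms, 582.03ms, ... up to several seconds). The sketch below reproduces only that observable backoff-with-jitter behaviour; minikube's actual policy lives in its retry package and may differ.

// retry_sketch.go -- sketch of the backoff-and-retry pattern in this log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Exponential growth plus random jitter, which explains the
		// uneven "will retry after ..." intervals above.
		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused") // stand-in for the failing kubectl apply
		}
		return nil
	})
	fmt.Println("final:", err)
}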
	I1210 06:28:11.378345  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:11.575829  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.638697  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:11.638779  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.638826  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.638871  401365 retry.go:31] will retry after 208.428392ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.692820  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.696338  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.696370  401365 retry.go:31] will retry after 282.781918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.847619  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:11.878109  401365 type.go:168] "Request Body" body=""
	I1210 06:28:11.878199  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:11.878519  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:11.905645  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:11.908839  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.908880  401365 retry.go:31] will retry after 582.02813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:11.980121  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:12.039691  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.043135  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.043170  401365 retry.go:31] will retry after 432.314142ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.377666  401365 type.go:168] "Request Body" body=""
	I1210 06:28:12.377744  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:12.378081  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:12.476496  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:12.492099  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:12.562290  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.562336  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562356  401365 retry.go:31] will retry after 1.009011504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562409  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:12.562427  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.562433  401365 retry.go:31] will retry after 937.221861ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:12.877643  401365 type.go:168] "Request Body" body=""
	I1210 06:28:12.877787  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:12.878157  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:13.377677  401365 type.go:168] "Request Body" body=""
	I1210 06:28:13.377754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:13.378100  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:13.378160  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
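	[editor's note] Interleaved with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-253997 roughly every 500ms, logging a warning each time the dial is refused, for up to the 6m0s budget noted earlier. The Go sketch below reduces that loop to a plain reachability poll; it skips TLS verification and does not parse the node's Ready condition, both simplifications over what minikube actually does with its client config.

// nodeready_sketch.go -- simplified sketch of the node readiness wait loop.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-253997"

	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("will retry:", err) // e.g. dial tcp ...: connection refused
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered with status", resp.StatusCode)
		return
	}
	fmt.Println("timed out waiting for node to be Ready")
}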
	I1210 06:28:13.500598  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:13.556443  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:13.560062  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.560116  401365 retry.go:31] will retry after 1.265541277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.572329  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:13.633856  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:13.637464  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.637509  401365 retry.go:31] will retry after 1.331173049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:13.877793  401365 type.go:168] "Request Body" body=""
	I1210 06:28:13.877888  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:13.878199  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.377638  401365 type.go:168] "Request Body" body=""
	I1210 06:28:14.377730  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:14.378082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.825793  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:14.878190  401365 type.go:168] "Request Body" body=""
	I1210 06:28:14.878261  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:14.878521  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:14.884055  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:14.884152  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:14.884201  401365 retry.go:31] will retry after 1.396995132s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:14.969467  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:15.059973  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:15.064387  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:15.064489  401365 retry.go:31] will retry after 957.92161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:15.377700  401365 type.go:168] "Request Body" body=""
	I1210 06:28:15.377772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:15.378126  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:15.378206  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:15.877555  401365 type.go:168] "Request Body" body=""
	I1210 06:28:15.877664  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:15.877987  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:16.023398  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:16.083212  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:16.083269  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.083288  401365 retry.go:31] will retry after 3.316582994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.281469  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:16.346229  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:16.346265  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.346285  401365 retry.go:31] will retry after 2.05295153s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:16.378603  401365 type.go:168] "Request Body" body=""
	I1210 06:28:16.378688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:16.379017  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:16.877615  401365 type.go:168] "Request Body" body=""
	I1210 06:28:16.877690  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:16.878019  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:17.377588  401365 type.go:168] "Request Body" body=""
	I1210 06:28:17.377663  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:17.377946  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:17.877651  401365 type.go:168] "Request Body" body=""
	I1210 06:28:17.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:17.878120  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:17.878201  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:18.377673  401365 type.go:168] "Request Body" body=""
	I1210 06:28:18.377749  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:18.378108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:18.400386  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:18.462469  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:18.462509  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:18.462528  401365 retry.go:31] will retry after 3.621738225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:18.877637  401365 type.go:168] "Request Body" body=""
	I1210 06:28:18.877719  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:18.878031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:19.377699  401365 type.go:168] "Request Body" body=""
	I1210 06:28:19.377775  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:19.378123  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:19.400389  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:19.462507  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:19.462542  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:19.462562  401365 retry.go:31] will retry after 6.347571238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:19.878220  401365 type.go:168] "Request Body" body=""
	I1210 06:28:19.878303  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:19.878573  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:19.878624  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:20.378571  401365 type.go:168] "Request Body" body=""
	I1210 06:28:20.378643  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:20.378957  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:20.877707  401365 type.go:168] "Request Body" body=""
	I1210 06:28:20.877781  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:20.878082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:21.377732  401365 type.go:168] "Request Body" body=""
	I1210 06:28:21.377840  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:21.378217  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:21.877933  401365 type.go:168] "Request Body" body=""
	I1210 06:28:21.878005  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:21.878280  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:22.084823  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:22.150796  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:22.150852  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:22.150872  401365 retry.go:31] will retry after 8.518894464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:22.378239  401365 type.go:168] "Request Body" body=""
	I1210 06:28:22.378314  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:22.378638  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:22.378700  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:22.878392  401365 type.go:168] "Request Body" body=""
	I1210 06:28:22.878470  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:22.878811  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:23.378412  401365 type.go:168] "Request Body" body=""
	I1210 06:28:23.378493  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:23.378816  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:23.878580  401365 type.go:168] "Request Body" body=""
	I1210 06:28:23.878657  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:23.879035  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:24.377745  401365 type.go:168] "Request Body" body=""
	I1210 06:28:24.377829  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:24.378165  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:24.878042  401365 type.go:168] "Request Body" body=""
	I1210 06:28:24.878110  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:24.878379  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:24.878424  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:25.378073  401365 type.go:168] "Request Body" body=""
	I1210 06:28:25.378148  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:25.378492  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:25.811094  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:25.867131  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:25.870279  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:25.870312  401365 retry.go:31] will retry after 4.064346895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
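Each failed apply is rescheduled by retry.go with an irregular, growing delay: across this section the storage-provisioner retries go 4.1s, 13.3s, 14.6s, 29.6s and the storageclass ones 8.5s, 6.4s, 10.8s, 18.1s, 43.9s, consistent with a roughly doubling delay plus jitter. A self-contained sketch of that pattern (an assumption about the shape of the loop, not minikube's actual retry.go):

    package main

    import (
    	"errors"
    	"log"
    	"math/rand"
    	"time"
    )

    // retry runs fn until it succeeds or attempts are exhausted, roughly
    // doubling a jittered delay between tries, as the "will retry after"
    // lines above suggest.
    func retry(fn func() error, initial time.Duration, attempts int) error {
    	delay := initial
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		log.Printf("will retry after %s: %v", wait, err)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	_ = retry(func() error { return errors.New("connection refused") }, 4*time.Second, 3)
    }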
	I1210 06:28:25.878534  401365 type.go:168] "Request Body" body=""
	I1210 06:28:25.878610  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:25.878933  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:26.378357  401365 type.go:168] "Request Body" body=""
	I1210 06:28:26.378423  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:26.378728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:26.878539  401365 type.go:168] "Request Body" body=""
	I1210 06:28:26.878615  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:26.878903  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:26.878950  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:27.377638  401365 type.go:168] "Request Body" body=""
	I1210 06:28:27.377740  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:27.378052  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:27.878386  401365 type.go:168] "Request Body" body=""
	I1210 06:28:27.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:27.878757  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:28.378587  401365 type.go:168] "Request Body" body=""
	I1210 06:28:28.378670  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:28.379016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:28.877671  401365 type.go:168] "Request Body" body=""
	I1210 06:28:28.877745  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:28.878083  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:29.378412  401365 type.go:168] "Request Body" body=""
	I1210 06:28:29.378486  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:29.378756  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:29.378811  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:29.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:28:29.877767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:29.878126  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:29.935383  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:29.993267  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:29.993316  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:29.993335  401365 retry.go:31] will retry after 13.293540925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.377660  401365 type.go:168] "Request Body" body=""
	I1210 06:28:30.377733  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:30.378062  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:30.670723  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:30.731809  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:30.735358  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.735395  401365 retry.go:31] will retry after 6.439855049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:30.877636  401365 type.go:168] "Request Body" body=""
	I1210 06:28:30.877707  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:30.878037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:31.377698  401365 type.go:168] "Request Body" body=""
	I1210 06:28:31.377773  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:31.378112  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:31.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:28:31.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:31.878135  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:31.878196  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:32.377829  401365 type.go:168] "Request Body" body=""
	I1210 06:28:32.377902  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:32.378167  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:32.877646  401365 type.go:168] "Request Body" body=""
	I1210 06:28:32.877742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:32.878081  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:33.377665  401365 type.go:168] "Request Body" body=""
	I1210 06:28:33.377750  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:33.378080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:33.878372  401365 type.go:168] "Request Body" body=""
	I1210 06:28:33.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:33.878725  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:33.878768  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:34.378621  401365 type.go:168] "Request Body" body=""
	I1210 06:28:34.378700  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:34.379046  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:34.877880  401365 type.go:168] "Request Body" body=""
	I1210 06:28:34.877952  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:34.878345  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:35.378044  401365 type.go:168] "Request Body" body=""
	I1210 06:28:35.378114  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:35.378389  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:35.878221  401365 type.go:168] "Request Body" body=""
	I1210 06:28:35.878303  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:35.878728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:35.878785  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:36.378584  401365 type.go:168] "Request Body" body=""
	I1210 06:28:36.378665  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:36.379008  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:36.878369  401365 type.go:168] "Request Body" body=""
	I1210 06:28:36.878444  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:36.878707  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:37.176405  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:37.232388  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:37.235885  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:37.235920  401365 retry.go:31] will retry after 10.78688793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:37.378207  401365 type.go:168] "Request Body" body=""
	I1210 06:28:37.378282  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:37.378581  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:37.878419  401365 type.go:168] "Request Body" body=""
	I1210 06:28:37.878495  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:37.878813  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:37.878863  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:38.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:28:38.378474  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:38.378754  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:38.878544  401365 type.go:168] "Request Body" body=""
	I1210 06:28:38.878622  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:38.878987  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:39.377713  401365 type.go:168] "Request Body" body=""
	I1210 06:28:39.377797  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:39.378129  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:39.878083  401365 type.go:168] "Request Body" body=""
	I1210 06:28:39.878150  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:39.878429  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:40.378442  401365 type.go:168] "Request Body" body=""
	I1210 06:28:40.378523  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:40.378856  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:40.378911  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:40.877583  401365 type.go:168] "Request Body" body=""
	I1210 06:28:40.877679  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:40.878034  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:41.378374  401365 type.go:168] "Request Body" body=""
	I1210 06:28:41.378447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:41.378715  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:41.878491  401365 type.go:168] "Request Body" body=""
	I1210 06:28:41.878566  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:41.878923  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:42.377671  401365 type.go:168] "Request Body" body=""
	I1210 06:28:42.377751  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:42.378141  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:42.877599  401365 type.go:168] "Request Body" body=""
	I1210 06:28:42.877683  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:42.877945  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:42.877984  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:43.287649  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:43.346928  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:43.346975  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:43.346995  401365 retry.go:31] will retry after 14.625741063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:43.378235  401365 type.go:168] "Request Body" body=""
	I1210 06:28:43.378315  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:43.378642  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:43.878431  401365 type.go:168] "Request Body" body=""
	I1210 06:28:43.878526  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:43.878848  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:44.378341  401365 type.go:168] "Request Body" body=""
	I1210 06:28:44.378412  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:44.378674  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:44.877586  401365 type.go:168] "Request Body" body=""
	I1210 06:28:44.877680  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:44.878028  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:44.878086  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:45.377798  401365 type.go:168] "Request Body" body=""
	I1210 06:28:45.377879  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:45.378245  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:45.878503  401365 type.go:168] "Request Body" body=""
	I1210 06:28:45.878572  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:45.878831  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:46.378595  401365 type.go:168] "Request Body" body=""
	I1210 06:28:46.378672  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:46.378982  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:46.877682  401365 type.go:168] "Request Body" body=""
	I1210 06:28:46.877759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:46.878095  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:46.878155  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
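The node_ready loop above polls GET /api/v1/nodes/functional-253997 every ~500ms, waiting for the node's Ready condition and tolerating connection-refused while the apiserver is down. A minimal client-go version of that wait (a sketch; the kubeconfig path and node name are taken from the log, the rest is assumed):

    package main

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-253997", metav1.GetOptions{})
    		if err != nil {
    			// e.g. dial tcp 192.168.49.2:8441: connect: connection refused
    			log.Printf("will retry: %v", err)
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				log.Println("node is Ready")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }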
	I1210 06:28:47.377841  401365 type.go:168] "Request Body" body=""
	I1210 06:28:47.377917  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:47.378263  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:47.877992  401365 type.go:168] "Request Body" body=""
	I1210 06:28:47.878078  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:47.878438  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:48.023828  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:28:48.081536  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:48.084895  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:48.084933  401365 retry.go:31] will retry after 18.097374996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:48.378332  401365 type.go:168] "Request Body" body=""
	I1210 06:28:48.378422  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:48.378753  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:48.878419  401365 type.go:168] "Request Body" body=""
	I1210 06:28:48.878497  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:48.878762  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:48.878816  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:49.378574  401365 type.go:168] "Request Body" body=""
	I1210 06:28:49.378648  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:49.378945  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:49.877700  401365 type.go:168] "Request Body" body=""
	I1210 06:28:49.877800  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:49.878143  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:50.377920  401365 type.go:168] "Request Body" body=""
	I1210 06:28:50.377988  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:50.378294  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:50.877693  401365 type.go:168] "Request Body" body=""
	I1210 06:28:50.877785  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:50.878095  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:51.377686  401365 type.go:168] "Request Body" body=""
	I1210 06:28:51.377791  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:51.378134  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:51.378207  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:51.877781  401365 type.go:168] "Request Body" body=""
	I1210 06:28:51.877851  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:51.878166  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:52.377911  401365 type.go:168] "Request Body" body=""
	I1210 06:28:52.377995  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:52.378322  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:52.878024  401365 type.go:168] "Request Body" body=""
	I1210 06:28:52.878097  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:52.878439  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:53.377622  401365 type.go:168] "Request Body" body=""
	I1210 06:28:53.377713  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:53.378024  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:53.877755  401365 type.go:168] "Request Body" body=""
	I1210 06:28:53.877852  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:53.878190  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:53.878248  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:54.377697  401365 type.go:168] "Request Body" body=""
	I1210 06:28:54.377777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:54.378108  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:54.877974  401365 type.go:168] "Request Body" body=""
	I1210 06:28:54.878043  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:54.878312  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:55.378006  401365 type.go:168] "Request Body" body=""
	I1210 06:28:55.378086  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:55.378481  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:55.878103  401365 type.go:168] "Request Body" body=""
	I1210 06:28:55.878195  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:55.878572  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:55.878630  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:56.378220  401365 type.go:168] "Request Body" body=""
	I1210 06:28:56.378297  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:56.378560  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:56.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:28:56.878464  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:56.878801  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:57.378592  401365 type.go:168] "Request Body" body=""
	I1210 06:28:57.378672  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:57.379008  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:57.877621  401365 type.go:168] "Request Body" body=""
	I1210 06:28:57.877696  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:57.878001  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:57.973321  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:28:58.030522  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:28:58.034296  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:28:58.034334  401365 retry.go:31] will retry after 29.63385811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
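Every failure in this stretch is the same connection refused on port 8441, reported both by the node poller (192.168.49.2:8441) and by kubectl ([::1]:8441), so the common cause is simply that nothing is accepting connections on the apiserver port yet. A quick reachability probe for that situation (a diagnostic sketch; TLS verification is skipped because only connectivity is being checked):

    package main

    import (
    	"crypto/tls"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// Reachability check only; do not do this for real API calls.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.49.2:8441/healthz")
    	if err != nil {
    		log.Printf("apiserver not reachable: %v", err)
    		return
    	}
    	defer resp.Body.Close()
    	log.Printf("healthz: %s", resp.Status)
    }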
	I1210 06:28:58.377818  401365 type.go:168] "Request Body" body=""
	I1210 06:28:58.377897  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:58.378240  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:28:58.378316  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:28:58.878004  401365 type.go:168] "Request Body" body=""
	I1210 06:28:58.878100  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:58.878429  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:59.378237  401365 type.go:168] "Request Body" body=""
	I1210 06:28:59.378307  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:59.378610  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:28:59.878397  401365 type.go:168] "Request Body" body=""
	I1210 06:28:59.878486  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:28:59.878865  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:00.377830  401365 type.go:168] "Request Body" body=""
	I1210 06:29:00.377911  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:00.378308  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:00.378388  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:00.877903  401365 type.go:168] "Request Body" body=""
	I1210 06:29:00.877979  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:00.878324  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:01.378045  401365 type.go:168] "Request Body" body=""
	I1210 06:29:01.378142  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:01.378492  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:01.878290  401365 type.go:168] "Request Body" body=""
	I1210 06:29:01.878364  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:01.878682  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:02.378481  401365 type.go:168] "Request Body" body=""
	I1210 06:29:02.378563  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:02.378938  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:02.379007  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:02.877673  401365 type.go:168] "Request Body" body=""
	I1210 06:29:02.877771  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:02.878144  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:03.378397  401365 type.go:168] "Request Body" body=""
	I1210 06:29:03.378485  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:03.378752  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:03.878546  401365 type.go:168] "Request Body" body=""
	I1210 06:29:03.878622  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:03.878936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:04.377644  401365 type.go:168] "Request Body" body=""
	I1210 06:29:04.377748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:04.378092  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:04.877915  401365 type.go:168] "Request Body" body=""
	I1210 06:29:04.877993  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:04.878265  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:04.878310  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:05.377970  401365 type.go:168] "Request Body" body=""
	I1210 06:29:05.378056  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:05.378385  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:05.877707  401365 type.go:168] "Request Body" body=""
	I1210 06:29:05.877783  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:05.878096  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:06.182558  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:29:06.240148  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:06.243928  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 06:29:06.243964  401365 retry.go:31] will retry after 43.852698404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
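	[editor's note] The "retry.go:31] will retry after 43.852698404s" entry above is minikube scheduling a backoff retry of the failed addon apply. As a rough sketch of that pattern only (the helper below is invented for illustration and is not minikube's actual retry code), a jittered exponential backoff in Go looks like:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn until it succeeds or maxTime elapses,
	// sleeping an exponentially growing, jittered interval between
	// attempts. Illustrative only; minikube's constants differ.
	func retryWithBackoff(fn func() error, maxTime time.Duration) error {
		deadline := time.Now().Add(maxTime)
		wait := time.Second
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out, last error: %w", err)
			}
			// Up to 50% jitter so concurrent retries do not synchronize.
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			wait *= 2
		}
	}

	func main() {
		attempts := 0
		_ = retryWithBackoff(func() error {
			attempts++
			if attempts < 3 {
				return fmt.Errorf("connection refused")
			}
			return nil
		}, time.Minute)
	}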
	I1210 06:29:06.378184–06:29:27.378728  401365 round_trippers: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-253997 poll continued every ~500 ms with no response (status="", 0 ms); node_ready.go:55 logged the connection-refused warning roughly every 2–2.5 s (06:29:06.878 through 06:29:25.378). [repeated entries condensed]
	I1210 06:29:27.669323  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:29:27.726986  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:27.731088  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:27.731190  401365 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
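	[editor's note] This failure happens before anything reaches the cluster: kubectl's client-side validation first downloads the server's OpenAPI schema from /openapi/v2, and with the apiserver unreachable on :8441 that download itself is refused, which is why kubectl suggests --validate=false (skipping validation would not help here, since the apply itself still needs a reachable apiserver). A minimal local sketch of the same invocation, illustrative only; minikube actually runs this through its SSH runner inside the node:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddon shells out the same command shown in the ssh_runner log
	// line, but locally via os/exec rather than over SSH. Paths mirror
	// the log; this is a sketch, not minikube code.
	func applyAddon(manifest string) error {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err != nil {
			// With the apiserver down, kubectl exits 1 while fetching
			// /openapi/v2 for client-side validation.
			return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
		}
		return nil
	}

	func main() {
		if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
			fmt.Println(err)
		}
	}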
	I1210 06:29:27.878451–06:29:49.878364  401365 round_trippers: GET https://192.168.49.2:8441/api/v1/nodes/functional-253997 kept polling every ~500 ms with no response (status="", 0 ms); node_ready.go:55 connection-refused warnings recurred every 2–2.5 s (06:29:27.878 through 06:29:48.879). [repeated entries condensed]
	I1210 06:29:50.096947  401365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:29:50.160267  401365 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:50.160316  401365 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 06:29:50.160396  401365 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 06:29:50.163553  401365 out.go:179] * Enabled addons: 
	I1210 06:29:50.167218  401365 addons.go:530] duration metric: took 1m39.789022145s for enable addons: enabled=[]
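	[editor's note] The interleaved GET entries throughout this log come from a readiness wait that polls the node object roughly every 500 ms until the apiserver answers. A minimal sketch of that polling shape (assumption: the real code uses client-go and inspects the node's Ready condition; this stripped-down version only mirrors the cadence and the connection-refused retries seen above):

	package main

	import (
		"context"
		"fmt"
		"net/http"
		"time"
	)

	// waitNodeReady polls the given URL every 500 ms until a 200 response
	// arrives or ctx expires, matching the cadence visible in the log.
	func waitNodeReady(ctx context.Context, url string) error {
		client := &http.Client{Timeout: 2 * time.Second}
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			} else {
				fmt.Println("will retry:", err) // e.g. connect: connection refused
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()
		_ = waitNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-253997")
	}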
	I1210 06:29:50.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:29:50.377718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:50.378070  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:50.877691  401365 type.go:168] "Request Body" body=""
	I1210 06:29:50.877785  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:50.878103  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:51.378394  401365 type.go:168] "Request Body" body=""
	I1210 06:29:51.378472  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:51.378746  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:51.378813  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:51.878588  401365 type.go:168] "Request Body" body=""
	I1210 06:29:51.878669  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:51.878981  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:52.377564  401365 type.go:168] "Request Body" body=""
	I1210 06:29:52.377654  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:52.378002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:52.878385  401365 type.go:168] "Request Body" body=""
	I1210 06:29:52.878456  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:52.878735  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:53.378623  401365 type.go:168] "Request Body" body=""
	I1210 06:29:53.378696  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:53.379007  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:53.379062  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:29:53.877727  401365 type.go:168] "Request Body" body=""
	I1210 06:29:53.877818  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:53.878163  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:54.377608  401365 type.go:168] "Request Body" body=""
	I1210 06:29:54.377697  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:54.378015  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:54.877713  401365 type.go:168] "Request Body" body=""
	I1210 06:29:54.877810  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:54.878175  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:55.377895  401365 type.go:168] "Request Body" body=""
	I1210 06:29:55.377968  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:55.378309  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:29:55.877991  401365 type.go:168] "Request Body" body=""
	I1210 06:29:55.878064  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:29:55.878416  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:29:55.878476  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET poll and "connection refused" result continued for the next minute: requests every ~500ms, the node_ready.go:55 warning roughly every 2s, the last warning at W1210 06:30:54.878867; the final attempts follow ...]
	I1210 06:30:56.377687  401365 type.go:168] "Request Body" body=""
	I1210 06:30:56.377765  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:56.378102  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:56.878412  401365 type.go:168] "Request Body" body=""
	I1210 06:30:56.878480  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:56.878765  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:57.378590  401365 type.go:168] "Request Body" body=""
	I1210 06:30:57.378667  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:57.379010  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:57.379066  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:30:57.877659  401365 type.go:168] "Request Body" body=""
	I1210 06:30:57.877736  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:57.878094  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:58.377804  401365 type.go:168] "Request Body" body=""
	I1210 06:30:58.377882  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:58.378161  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:58.877653  401365 type.go:168] "Request Body" body=""
	I1210 06:30:58.877724  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:58.878038  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:59.377678  401365 type.go:168] "Request Body" body=""
	I1210 06:30:59.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:59.378090  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:30:59.878022  401365 type.go:168] "Request Body" body=""
	I1210 06:30:59.878105  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:30:59.878446  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:30:59.878509  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:00.377586  401365 type.go:168] "Request Body" body=""
	I1210 06:31:00.377680  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:00.378151  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:00.877892  401365 type.go:168] "Request Body" body=""
	I1210 06:31:00.877975  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:00.878336  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:01.377928  401365 type.go:168] "Request Body" body=""
	I1210 06:31:01.378000  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:01.378269  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:01.877906  401365 type.go:168] "Request Body" body=""
	I1210 06:31:01.877996  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:01.878438  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:02.377746  401365 type.go:168] "Request Body" body=""
	I1210 06:31:02.377823  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:02.378191  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:02.378256  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:02.878389  401365 type.go:168] "Request Body" body=""
	I1210 06:31:02.878461  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:02.878756  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:03.378549  401365 type.go:168] "Request Body" body=""
	I1210 06:31:03.378628  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:03.378977  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:03.877675  401365 type.go:168] "Request Body" body=""
	I1210 06:31:03.877754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:03.878104  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:04.377643  401365 type.go:168] "Request Body" body=""
	I1210 06:31:04.377719  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:04.378069  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:04.878124  401365 type.go:168] "Request Body" body=""
	I1210 06:31:04.878218  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:04.878572  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:04.878635  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:05.378399  401365 type.go:168] "Request Body" body=""
	I1210 06:31:05.378481  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:05.378786  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:05.878376  401365 type.go:168] "Request Body" body=""
	I1210 06:31:05.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:05.878782  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:06.378579  401365 type.go:168] "Request Body" body=""
	I1210 06:31:06.378655  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:06.379033  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:06.877752  401365 type.go:168] "Request Body" body=""
	I1210 06:31:06.877828  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:06.878145  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:07.377614  401365 type.go:168] "Request Body" body=""
	I1210 06:31:07.377703  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:07.378053  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:07.378103  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:07.877679  401365 type.go:168] "Request Body" body=""
	I1210 06:31:07.877774  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:07.878132  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:08.377692  401365 type.go:168] "Request Body" body=""
	I1210 06:31:08.377773  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:08.378135  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:08.877811  401365 type.go:168] "Request Body" body=""
	I1210 06:31:08.877884  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:08.878180  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:09.377668  401365 type.go:168] "Request Body" body=""
	I1210 06:31:09.377754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:09.378101  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:09.378155  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:09.877923  401365 type.go:168] "Request Body" body=""
	I1210 06:31:09.877999  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:09.878321  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:10.378307  401365 type.go:168] "Request Body" body=""
	I1210 06:31:10.378386  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:10.378650  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:10.878423  401365 type.go:168] "Request Body" body=""
	I1210 06:31:10.878500  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:10.878869  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:11.378503  401365 type.go:168] "Request Body" body=""
	I1210 06:31:11.378584  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:11.378952  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:11.379008  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:11.878378  401365 type.go:168] "Request Body" body=""
	I1210 06:31:11.878450  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:11.878715  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:12.378514  401365 type.go:168] "Request Body" body=""
	I1210 06:31:12.378591  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:12.378905  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:12.877633  401365 type.go:168] "Request Body" body=""
	I1210 06:31:12.877711  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:12.878080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:13.378362  401365 type.go:168] "Request Body" body=""
	I1210 06:31:13.378431  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:13.378728  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:13.878515  401365 type.go:168] "Request Body" body=""
	I1210 06:31:13.878592  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:13.878918  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:13.878976  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:14.377681  401365 type.go:168] "Request Body" body=""
	I1210 06:31:14.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:14.378115  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:14.878072  401365 type.go:168] "Request Body" body=""
	I1210 06:31:14.878147  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:14.878405  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:15.378262  401365 type.go:168] "Request Body" body=""
	I1210 06:31:15.378345  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:15.378686  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:15.878492  401365 type.go:168] "Request Body" body=""
	I1210 06:31:15.878569  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:15.878935  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:16.378356  401365 type.go:168] "Request Body" body=""
	I1210 06:31:16.378441  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:16.378690  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:16.378731  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:16.878535  401365 type.go:168] "Request Body" body=""
	I1210 06:31:16.878609  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:16.878944  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:17.377679  401365 type.go:168] "Request Body" body=""
	I1210 06:31:17.377753  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:17.378118  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:17.877723  401365 type.go:168] "Request Body" body=""
	I1210 06:31:17.877797  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:17.878157  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:18.377666  401365 type.go:168] "Request Body" body=""
	I1210 06:31:18.377744  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:18.378082  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:18.877660  401365 type.go:168] "Request Body" body=""
	I1210 06:31:18.877734  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:18.878083  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:18.878141  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:19.378341  401365 type.go:168] "Request Body" body=""
	I1210 06:31:19.378417  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:19.378680  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:19.878385  401365 type.go:168] "Request Body" body=""
	I1210 06:31:19.878484  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:19.878844  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:20.377537  401365 type.go:168] "Request Body" body=""
	I1210 06:31:20.377620  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:20.377967  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:20.877662  401365 type.go:168] "Request Body" body=""
	I1210 06:31:20.877748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:20.878176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:20.878224  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:21.377644  401365 type.go:168] "Request Body" body=""
	I1210 06:31:21.377723  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:21.378064  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:21.877799  401365 type.go:168] "Request Body" body=""
	I1210 06:31:21.877892  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:21.878256  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:22.377991  401365 type.go:168] "Request Body" body=""
	I1210 06:31:22.378069  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:22.378361  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:22.877671  401365 type.go:168] "Request Body" body=""
	I1210 06:31:22.877765  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:22.878106  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:23.377677  401365 type.go:168] "Request Body" body=""
	I1210 06:31:23.377772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:23.378156  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:23.378228  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:23.877603  401365 type.go:168] "Request Body" body=""
	I1210 06:31:23.877676  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:23.877972  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:24.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:31:24.377753  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:24.378120  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:24.877983  401365 type.go:168] "Request Body" body=""
	I1210 06:31:24.878078  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:24.878441  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:25.378227  401365 type.go:168] "Request Body" body=""
	I1210 06:31:25.378296  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:25.378552  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:25.378598  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:25.878364  401365 type.go:168] "Request Body" body=""
	I1210 06:31:25.878444  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:25.878791  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:26.377537  401365 type.go:168] "Request Body" body=""
	I1210 06:31:26.377611  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:26.377959  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:26.878388  401365 type.go:168] "Request Body" body=""
	I1210 06:31:26.878456  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:26.878725  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:27.378513  401365 type.go:168] "Request Body" body=""
	I1210 06:31:27.378591  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:27.378938  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:27.378993  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:27.877665  401365 type.go:168] "Request Body" body=""
	I1210 06:31:27.877739  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:27.878092  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:28.378425  401365 type.go:168] "Request Body" body=""
	I1210 06:31:28.378506  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:28.378821  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:28.877546  401365 type.go:168] "Request Body" body=""
	I1210 06:31:28.877631  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:28.878002  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:29.377725  401365 type.go:168] "Request Body" body=""
	I1210 06:31:29.377802  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:29.378176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:29.878060  401365 type.go:168] "Request Body" body=""
	I1210 06:31:29.878133  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:29.878404  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:29.878448  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:30.378433  401365 type.go:168] "Request Body" body=""
	I1210 06:31:30.378508  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:30.378874  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:30.877621  401365 type.go:168] "Request Body" body=""
	I1210 06:31:30.877699  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:30.878072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:31.377633  401365 type.go:168] "Request Body" body=""
	I1210 06:31:31.377704  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:31.378026  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:31.877665  401365 type.go:168] "Request Body" body=""
	I1210 06:31:31.877748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:31.878092  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:32.377680  401365 type.go:168] "Request Body" body=""
	I1210 06:31:32.377759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:32.378156  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:32.378215  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:32.878393  401365 type.go:168] "Request Body" body=""
	I1210 06:31:32.878461  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:32.878721  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:33.378508  401365 type.go:168] "Request Body" body=""
	I1210 06:31:33.378585  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:33.379111  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:33.877686  401365 type.go:168] "Request Body" body=""
	I1210 06:31:33.877775  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:33.878146  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:34.377671  401365 type.go:168] "Request Body" body=""
	I1210 06:31:34.377743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:34.378043  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:34.877949  401365 type.go:168] "Request Body" body=""
	I1210 06:31:34.878028  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:34.878374  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:34.878438  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:35.378226  401365 type.go:168] "Request Body" body=""
	I1210 06:31:35.378306  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:35.378649  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:35.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:31:35.878471  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:35.878748  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:36.378551  401365 type.go:168] "Request Body" body=""
	I1210 06:31:36.378631  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:36.378948  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:36.877548  401365 type.go:168] "Request Body" body=""
	I1210 06:31:36.877626  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:36.877995  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:37.378404  401365 type.go:168] "Request Body" body=""
	I1210 06:31:37.378472  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:37.378739  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:37.378783  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:37.878571  401365 type.go:168] "Request Body" body=""
	I1210 06:31:37.878646  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:37.878969  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:38.378416  401365 type.go:168] "Request Body" body=""
	I1210 06:31:38.378491  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:38.378834  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:38.878423  401365 type.go:168] "Request Body" body=""
	I1210 06:31:38.878499  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:38.878770  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:39.378611  401365 type.go:168] "Request Body" body=""
	I1210 06:31:39.378694  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:39.379044  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:39.379105  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:39.878018  401365 type.go:168] "Request Body" body=""
	I1210 06:31:39.878102  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:39.878461  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:40.378264  401365 type.go:168] "Request Body" body=""
	I1210 06:31:40.378348  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:40.378617  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:40.878427  401365 type.go:168] "Request Body" body=""
	I1210 06:31:40.878510  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:40.878851  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:41.377583  401365 type.go:168] "Request Body" body=""
	I1210 06:31:41.377658  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:41.377991  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:41.877560  401365 type.go:168] "Request Body" body=""
	I1210 06:31:41.877633  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:41.877903  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:41.877948  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:42.377649  401365 type.go:168] "Request Body" body=""
	I1210 06:31:42.377727  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:42.378093  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:42.877664  401365 type.go:168] "Request Body" body=""
	I1210 06:31:42.877739  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:42.878032  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:43.378436  401365 type.go:168] "Request Body" body=""
	I1210 06:31:43.378507  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:43.378831  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:43.878454  401365 type.go:168] "Request Body" body=""
	I1210 06:31:43.878546  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:43.878900  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:43.878962  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:44.378527  401365 type.go:168] "Request Body" body=""
	I1210 06:31:44.378608  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:44.378911  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:44.877852  401365 type.go:168] "Request Body" body=""
	I1210 06:31:44.877944  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:44.878230  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:45.377683  401365 type.go:168] "Request Body" body=""
	I1210 06:31:45.377757  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:45.378232  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:45.877964  401365 type.go:168] "Request Body" body=""
	I1210 06:31:45.878060  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:45.878412  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:46.378182  401365 type.go:168] "Request Body" body=""
	I1210 06:31:46.378267  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.378573  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:46.378621  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:46.878427  401365 type.go:168] "Request Body" body=""
	I1210 06:31:46.878510  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:46.878849  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.378554  401365 type.go:168] "Request Body" body=""
	I1210 06:31:47.378637  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.379010  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:47.878381  401365 type.go:168] "Request Body" body=""
	I1210 06:31:47.878453  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:47.878751  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:48.378580  401365 type.go:168] "Request Body" body=""
	I1210 06:31:48.378660  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.378984  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:48.379037  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:48.877565  401365 type.go:168] "Request Body" body=""
	I1210 06:31:48.877642  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:48.877972  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.378371  401365 type.go:168] "Request Body" body=""
	I1210 06:31:49.378448  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.378712  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:49.878386  401365 type.go:168] "Request Body" body=""
	I1210 06:31:49.878461  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:49.878790  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.377587  401365 type.go:168] "Request Body" body=""
	I1210 06:31:50.377673  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.378035  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:50.878395  401365 type.go:168] "Request Body" body=""
	I1210 06:31:50.878469  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:50.878754  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:50.878808  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:51.378548  401365 type.go:168] "Request Body" body=""
	I1210 06:31:51.378627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.378976  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:51.877595  401365 type.go:168] "Request Body" body=""
	I1210 06:31:51.877677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:51.878064  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.378358  401365 type.go:168] "Request Body" body=""
	I1210 06:31:52.378433  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.378695  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:52.878474  401365 type.go:168] "Request Body" body=""
	I1210 06:31:52.878551  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:52.878895  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:52.878957  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:53.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:31:53.377721  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.378047  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:53.877607  401365 type.go:168] "Request Body" body=""
	I1210 06:31:53.877682  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:53.878066  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.377680  401365 type.go:168] "Request Body" body=""
	I1210 06:31:54.377759  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.378069  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:54.877984  401365 type.go:168] "Request Body" body=""
	I1210 06:31:54.878068  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:54.878451  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:55.378227  401365 type.go:168] "Request Body" body=""
	I1210 06:31:55.378305  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.378567  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:55.378612  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:55.878449  401365 type.go:168] "Request Body" body=""
	I1210 06:31:55.878524  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:55.878878  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.377607  401365 type.go:168] "Request Body" body=""
	I1210 06:31:56.377684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.378031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:56.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:31:56.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:56.878731  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:57.378523  401365 type.go:168] "Request Body" body=""
	I1210 06:31:57.378605  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.378963  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:57.379024  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:31:57.878422  401365 type.go:168] "Request Body" body=""
	I1210 06:31:57.878496  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:57.878837  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:58.378369  401365 type.go:168] "Request Body" body=""
	I1210 06:31:58.378450  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.378724  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:58.878516  401365 type.go:168] "Request Body" body=""
	I1210 06:31:58.878590  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:58.878936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:31:59.377756  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.378079  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:31:59.878003  401365 type.go:168] "Request Body" body=""
	I1210 06:31:59.878079  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:31:59.878346  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:31:59.878388  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:00.378620  401365 type.go:168] "Request Body" body=""
	I1210 06:32:00.378720  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.379187  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:00.877753  401365 type.go:168] "Request Body" body=""
	I1210 06:32:00.877830  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:00.878187  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.377623  401365 type.go:168] "Request Body" body=""
	I1210 06:32:01.377694  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.377960  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:01.877717  401365 type.go:168] "Request Body" body=""
	I1210 06:32:01.877791  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:01.878137  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:02.377672  401365 type.go:168] "Request Body" body=""
	I1210 06:32:02.377750  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.378099  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:02.378152  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:02.878416  401365 type.go:168] "Request Body" body=""
	I1210 06:32:02.878493  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:02.878764  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:03.378615  401365 type.go:168] "Request Body" body=""
	I1210 06:32:03.378694  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:03.379016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:03.877719  401365 type.go:168] "Request Body" body=""
	I1210 06:32:03.877801  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:03.878168  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:04.377604  401365 type.go:168] "Request Body" body=""
	I1210 06:32:04.377682  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:04.378022  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:04.878029  401365 type.go:168] "Request Body" body=""
	I1210 06:32:04.878113  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:04.878426  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:04.878477  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:05.378217  401365 type.go:168] "Request Body" body=""
	I1210 06:32:05.378293  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:05.378623  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:05.878242  401365 type.go:168] "Request Body" body=""
	I1210 06:32:05.878313  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:05.878586  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:06.378446  401365 type.go:168] "Request Body" body=""
	I1210 06:32:06.378528  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:06.378861  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:06.877578  401365 type.go:168] "Request Body" body=""
	I1210 06:32:06.877651  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:06.877991  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:07.378348  401365 type.go:168] "Request Body" body=""
	I1210 06:32:07.378430  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:07.378696  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:07.378747  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:07.878485  401365 type.go:168] "Request Body" body=""
	I1210 06:32:07.878566  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:07.878891  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:08.377683  401365 type.go:168] "Request Body" body=""
	I1210 06:32:08.377758  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:08.378068  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:08.877617  401365 type.go:168] "Request Body" body=""
	I1210 06:32:08.877686  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:08.877996  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:09.377667  401365 type.go:168] "Request Body" body=""
	I1210 06:32:09.377749  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:09.378070  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:09.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:32:09.878526  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:09.878847  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:09.878895  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:10.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:32:10.377695  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:10.377992  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:10.877670  401365 type.go:168] "Request Body" body=""
	I1210 06:32:10.877747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:10.878107  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:11.377752  401365 type.go:168] "Request Body" body=""
	I1210 06:32:11.377832  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:11.378194  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:11.878391  401365 type.go:168] "Request Body" body=""
	I1210 06:32:11.878459  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:11.878721  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:12.378536  401365 type.go:168] "Request Body" body=""
	I1210 06:32:12.378609  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:12.379037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:12.379094  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:12.877634  401365 type.go:168] "Request Body" body=""
	I1210 06:32:12.877718  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:12.878024  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:13.377615  401365 type.go:168] "Request Body" body=""
	I1210 06:32:13.377684  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:13.377949  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:13.877636  401365 type.go:168] "Request Body" body=""
	I1210 06:32:13.877713  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:13.878091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:14.377642  401365 type.go:168] "Request Body" body=""
	I1210 06:32:14.377717  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:14.378074  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:14.877991  401365 type.go:168] "Request Body" body=""
	I1210 06:32:14.878073  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:14.878405  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:14.878468  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:15.378244  401365 type.go:168] "Request Body" body=""
	I1210 06:32:15.378316  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:15.378669  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:15.878506  401365 type.go:168] "Request Body" body=""
	I1210 06:32:15.878598  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:15.878952  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:16.378402  401365 type.go:168] "Request Body" body=""
	I1210 06:32:16.378473  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:16.378735  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:16.878581  401365 type.go:168] "Request Body" body=""
	I1210 06:32:16.878668  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:16.879029  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:16.879085  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:17.377664  401365 type.go:168] "Request Body" body=""
	I1210 06:32:17.377738  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:17.378065  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:17.877603  401365 type.go:168] "Request Body" body=""
	I1210 06:32:17.877677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:17.877943  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:18.377679  401365 type.go:168] "Request Body" body=""
	I1210 06:32:18.377754  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:18.378106  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:18.877827  401365 type.go:168] "Request Body" body=""
	I1210 06:32:18.877917  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:18.878299  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:19.377981  401365 type.go:168] "Request Body" body=""
	I1210 06:32:19.378062  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:19.378390  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:19.378451  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:19.878242  401365 type.go:168] "Request Body" body=""
	I1210 06:32:19.878318  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:19.878664  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:20.377555  401365 type.go:168] "Request Body" body=""
	I1210 06:32:20.377633  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:20.377966  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:20.877592  401365 type.go:168] "Request Body" body=""
	I1210 06:32:20.877663  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:20.878022  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:21.377596  401365 type.go:168] "Request Body" body=""
	I1210 06:32:21.377677  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:21.377971  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:21.877671  401365 type.go:168] "Request Body" body=""
	I1210 06:32:21.877747  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:21.878078  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:21.878135  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:22.377603  401365 type.go:168] "Request Body" body=""
	I1210 06:32:22.377681  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:22.377998  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:22.877713  401365 type.go:168] "Request Body" body=""
	I1210 06:32:22.877789  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:22.878146  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:23.378586  401365 type.go:168] "Request Body" body=""
	I1210 06:32:23.378663  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:23.379023  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:23.877627  401365 type.go:168] "Request Body" body=""
	I1210 06:32:23.877698  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:23.878027  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:24.377671  401365 type.go:168] "Request Body" body=""
	I1210 06:32:24.377755  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:24.378140  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:24.378210  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:24.878158  401365 type.go:168] "Request Body" body=""
	I1210 06:32:24.878240  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:24.878611  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:25.378254  401365 type.go:168] "Request Body" body=""
	I1210 06:32:25.378329  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:25.378601  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:25.878387  401365 type.go:168] "Request Body" body=""
	I1210 06:32:25.878460  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:25.878767  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:26.378460  401365 type.go:168] "Request Body" body=""
	I1210 06:32:26.378534  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:26.378923  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:26.378977  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:26.878379  401365 type.go:168] "Request Body" body=""
	I1210 06:32:26.878453  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:26.878804  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:27.378593  401365 type.go:168] "Request Body" body=""
	I1210 06:32:27.378674  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:27.379034  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:27.877668  401365 type.go:168] "Request Body" body=""
	I1210 06:32:27.877743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:27.878091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:28.378401  401365 type.go:168] "Request Body" body=""
	I1210 06:32:28.378470  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:28.378735  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:28.878509  401365 type.go:168] "Request Body" body=""
	I1210 06:32:28.878592  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:28.878904  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:28.878959  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:29.377676  401365 type.go:168] "Request Body" body=""
	I1210 06:32:29.377758  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:29.378099  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:29.877932  401365 type.go:168] "Request Body" body=""
	I1210 06:32:29.878011  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:29.878331  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:30.378437  401365 type.go:168] "Request Body" body=""
	I1210 06:32:30.378520  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:30.378881  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:30.877601  401365 type.go:168] "Request Body" body=""
	I1210 06:32:30.877679  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:30.877997  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:31.378413  401365 type.go:168] "Request Body" body=""
	I1210 06:32:31.378485  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:31.378800  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:31.378859  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:31.877545  401365 type.go:168] "Request Body" body=""
	I1210 06:32:31.877620  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:31.877962  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:32.377685  401365 type.go:168] "Request Body" body=""
	I1210 06:32:32.377765  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:32.378115  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:32.878382  401365 type.go:168] "Request Body" body=""
	I1210 06:32:32.878458  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:32.878718  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:33.378533  401365 type.go:168] "Request Body" body=""
	I1210 06:32:33.378613  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:33.378973  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:33.379031  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:33.877670  401365 type.go:168] "Request Body" body=""
	I1210 06:32:33.877743  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:33.878099  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:34.377573  401365 type.go:168] "Request Body" body=""
	I1210 06:32:34.377644  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:34.377911  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:34.877902  401365 type.go:168] "Request Body" body=""
	I1210 06:32:34.877978  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:34.878339  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:35.378057  401365 type.go:168] "Request Body" body=""
	I1210 06:32:35.378143  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:35.378506  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:35.878224  401365 type.go:168] "Request Body" body=""
	I1210 06:32:35.878295  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:35.878562  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:35.878604  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:36.378404  401365 type.go:168] "Request Body" body=""
	I1210 06:32:36.378487  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:36.378840  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:36.877571  401365 type.go:168] "Request Body" body=""
	I1210 06:32:36.877653  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:36.877994  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:37.378346  401365 type.go:168] "Request Body" body=""
	I1210 06:32:37.378421  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:37.378684  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:37.878461  401365 type.go:168] "Request Body" body=""
	I1210 06:32:37.878543  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:37.878890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:37.878952  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:38.378573  401365 type.go:168] "Request Body" body=""
	I1210 06:32:38.378654  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.378951  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:38.878358  401365 type.go:168] "Request Body" body=""
	I1210 06:32:38.878428  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:38.878691  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.378473  401365 type.go:168] "Request Body" body=""
	I1210 06:32:39.378552  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.378939  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:39.877654  401365 type.go:168] "Request Body" body=""
	I1210 06:32:39.877738  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:39.878074  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:40.377853  401365 type.go:168] "Request Body" body=""
	I1210 06:32:40.377926  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.378227  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:32:40.378275  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:32:40.877665  401365 type.go:168] "Request Body" body=""
	I1210 06:32:40.877741  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:40.878110  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:32:41.377673  401365 type.go:168] "Request Body" body=""
	I1210 06:32:41.377748  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:32:41.378080  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-253997 poll (type.go:168] "Request Body" body="", round_trippers.go:527] "Request", round_trippers.go:632] "Response" status="" headers="" milliseconds=0) repeats every ~500ms from 06:32:41.878456 through 06:33:43.378043; every attempt fails identically while the connection is refused ...]
	W1210 06:32:42.878186  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	[... this node_ready.go:55 warning recurs with the identical "connection refused" message roughly every 2-2.5s, 28 times in total, from 06:32:42.878186 through 06:33:42.878351 ...]
	I1210 06:33:43.877664  401365 type.go:168] "Request Body" body=""
	I1210 06:33:43.877741  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:43.878068  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:44.377646  401365 type.go:168] "Request Body" body=""
	I1210 06:33:44.377729  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.378109  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:44.877931  401365 type.go:168] "Request Body" body=""
	I1210 06:33:44.878000  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:44.878273  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:45.377683  401365 type.go:168] "Request Body" body=""
	I1210 06:33:45.377768  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.378162  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:45.378230  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:45.877646  401365 type.go:168] "Request Body" body=""
	I1210 06:33:45.877726  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:45.878079  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.378365  401365 type.go:168] "Request Body" body=""
	I1210 06:33:46.378443  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.378778  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:46.878592  401365 type.go:168] "Request Body" body=""
	I1210 06:33:46.878667  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:46.879016  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.377612  401365 type.go:168] "Request Body" body=""
	I1210 06:33:47.377693  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.378037  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:47.878404  401365 type.go:168] "Request Body" body=""
	I1210 06:33:47.878481  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:47.878791  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:47.878833  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:48.378603  401365 type.go:168] "Request Body" body=""
	I1210 06:33:48.378679  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.379038  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:48.877634  401365 type.go:168] "Request Body" body=""
	I1210 06:33:48.877710  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:48.878058  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.377585  401365 type.go:168] "Request Body" body=""
	I1210 06:33:49.377661  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.377929  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:49.877952  401365 type.go:168] "Request Body" body=""
	I1210 06:33:49.878030  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:49.878370  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:50.378433  401365 type.go:168] "Request Body" body=""
	I1210 06:33:50.378512  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.378851  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:50.378908  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:50.878409  401365 type.go:168] "Request Body" body=""
	I1210 06:33:50.878484  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:50.878745  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.378528  401365 type.go:168] "Request Body" body=""
	I1210 06:33:51.378608  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.378930  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:51.877680  401365 type.go:168] "Request Body" body=""
	I1210 06:33:51.877772  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:51.878121  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.377647  401365 type.go:168] "Request Body" body=""
	I1210 06:33:52.377767  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.378042  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:52.877736  401365 type.go:168] "Request Body" body=""
	I1210 06:33:52.877859  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:52.878200  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:52.878263  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:53.377750  401365 type.go:168] "Request Body" body=""
	I1210 06:33:53.377829  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.378164  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:53.878375  401365 type.go:168] "Request Body" body=""
	I1210 06:33:53.878447  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:53.878711  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.378552  401365 type.go:168] "Request Body" body=""
	I1210 06:33:54.378627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.378978  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:54.877937  401365 type.go:168] "Request Body" body=""
	I1210 06:33:54.878016  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:54.878372  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:54.878426  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:55.377557  401365 type.go:168] "Request Body" body=""
	I1210 06:33:55.377627  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.377890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:55.877581  401365 type.go:168] "Request Body" body=""
	I1210 06:33:55.877661  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:55.878044  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.377606  401365 type.go:168] "Request Body" body=""
	I1210 06:33:56.377687  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.378031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:56.878382  401365 type.go:168] "Request Body" body=""
	I1210 06:33:56.878463  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:56.878747  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:56.878792  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:57.378563  401365 type.go:168] "Request Body" body=""
	I1210 06:33:57.378655  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.379048  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:57.878429  401365 type.go:168] "Request Body" body=""
	I1210 06:33:57.878505  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:57.878838  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.378383  401365 type.go:168] "Request Body" body=""
	I1210 06:33:58.378457  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.378729  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:58.878537  401365 type.go:168] "Request Body" body=""
	I1210 06:33:58.878615  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:58.878961  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:33:58.879020  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:33:59.377698  401365 type.go:168] "Request Body" body=""
	I1210 06:33:59.377777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.378091  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:33:59.877943  401365 type.go:168] "Request Body" body=""
	I1210 06:33:59.878015  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:33:59.878285  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.388459  401365 type.go:168] "Request Body" body=""
	I1210 06:34:00.388551  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.388936  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:00.877633  401365 type.go:168] "Request Body" body=""
	I1210 06:34:00.877711  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:00.878072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:01.377618  401365 type.go:168] "Request Body" body=""
	I1210 06:34:01.377692  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.377964  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:01.378006  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:01.877703  401365 type.go:168] "Request Body" body=""
	I1210 06:34:01.877777  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:01.878116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.377805  401365 type.go:168] "Request Body" body=""
	I1210 06:34:02.377886  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.378243  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:02.877793  401365 type.go:168] "Request Body" body=""
	I1210 06:34:02.877861  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:02.878116  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:03.377724  401365 type.go:168] "Request Body" body=""
	I1210 06:34:03.377819  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.378176  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:03.378248  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:03.877926  401365 type.go:168] "Request Body" body=""
	I1210 06:34:03.877998  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:03.878340  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.378166  401365 type.go:168] "Request Body" body=""
	I1210 06:34:04.378243  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.378539  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:04.878398  401365 type.go:168] "Request Body" body=""
	I1210 06:34:04.878479  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:04.878809  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:05.378551  401365 type.go:168] "Request Body" body=""
	I1210 06:34:05.378630  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.379127  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:05.379181  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:05.877595  401365 type.go:168] "Request Body" body=""
	I1210 06:34:05.877669  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:05.877928  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.377667  401365 type.go:168] "Request Body" body=""
	I1210 06:34:06.377742  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.378094  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:06.877685  401365 type.go:168] "Request Body" body=""
	I1210 06:34:06.877764  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:06.878112  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.378383  401365 type.go:168] "Request Body" body=""
	I1210 06:34:07.378451  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.378722  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:07.878478  401365 type.go:168] "Request Body" body=""
	I1210 06:34:07.878553  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:07.878918  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:07.878972  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:08.378592  401365 type.go:168] "Request Body" body=""
	I1210 06:34:08.378675  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.379031  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:08.877609  401365 type.go:168] "Request Body" body=""
	I1210 06:34:08.877688  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:08.877968  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.377659  401365 type.go:168] "Request Body" body=""
	I1210 06:34:09.377734  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.378072  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:09.877922  401365 type.go:168] "Request Body" body=""
	I1210 06:34:09.878005  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:09.878441  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:10.378514  401365 type.go:168] "Request Body" body=""
	I1210 06:34:10.378590  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.378890  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 06:34:10.378934  401365 node_ready.go:55] error getting node "functional-253997" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-253997": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 06:34:10.877619  401365 type.go:168] "Request Body" body=""
	I1210 06:34:10.877709  401365 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-253997" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 06:34:10.878026  401365 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 06:34:11.377616  401365 type.go:168] "Request Body" body=""
	I1210 06:34:11.377679  401365 node_ready.go:38] duration metric: took 6m0.000247895s for node "functional-253997" to be "Ready" ...
	I1210 06:34:11.380832  401365 out.go:203] 
	W1210 06:34:11.383623  401365 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:34:11.383641  401365 out.go:285] * 
	W1210 06:34:11.385783  401365 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:34:11.388549  401365 out.go:203] 
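	The six minutes of round_trippers output above is minikube's node-readiness wait: it issues GET /api/v1/nodes/functional-253997 roughly every 500ms, treats each connection-refused error as retryable (the node_ready.go "will retry" warnings), and only gives up when the 6m0s deadline lapses, producing the GUEST_START exit. What follows is a minimal client-go sketch of that polling pattern, not minikube's actual node_ready.go code; waitNodeReady, the tick interval, and the kubeconfig path are illustrative assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node object until its Ready condition is True or
	// ctx expires. Transient GET errors (such as the "connection refused" seen
	// in the log above) are swallowed and retried.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
			case <-tick.C:
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					continue // retryable, e.g. apiserver still down
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "functional-253997"); err != nil {
			fmt.Println(err)
		}
	}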
	
	
	==> CRI-O <==
	Dec 10 06:34:20 functional-253997 crio[6019]: time="2025-12-10T06:34:20.226234681Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=a708ff60-bcea-483c-b679-ca4b4043100c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.320917933Z" level=info msg="Checking image status: minikube-local-cache-test:functional-253997" id=8cf16b14-ad6c-4516-86f9-efc2d004c46d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.32123257Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.321307328Z" level=info msg="Image minikube-local-cache-test:functional-253997 not found" id=8cf16b14-ad6c-4516-86f9-efc2d004c46d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.321416589Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-253997 found" id=8cf16b14-ad6c-4516-86f9-efc2d004c46d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.35050104Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-253997" id=723203b4-8d87-467a-93f7-8a39c29b88e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.350679963Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-253997 not found" id=723203b4-8d87-467a-93f7-8a39c29b88e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.350723106Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-253997 found" id=723203b4-8d87-467a-93f7-8a39c29b88e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.380319293Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-253997" id=a6b043ee-ff17-4b7e-a039-3a7f7e9eb2ef name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.380461883Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-253997 not found" id=a6b043ee-ff17-4b7e-a039-3a7f7e9eb2ef name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:21 functional-253997 crio[6019]: time="2025-12-10T06:34:21.38050327Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-253997 found" id=a6b043ee-ff17-4b7e-a039-3a7f7e9eb2ef name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:22 functional-253997 crio[6019]: time="2025-12-10T06:34:22.426976539Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=67fa22eb-6c70-41dd-bbb9-9c421c692d3d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:22 functional-253997 crio[6019]: time="2025-12-10T06:34:22.752414901Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1c569856-c3e8-4856-8110-7045b84de2ec name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:22 functional-253997 crio[6019]: time="2025-12-10T06:34:22.752561069Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=1c569856-c3e8-4856-8110-7045b84de2ec name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:22 functional-253997 crio[6019]: time="2025-12-10T06:34:22.752598329Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=1c569856-c3e8-4856-8110-7045b84de2ec name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.316751862Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=cc2e7227-fcb5-4e0b-b647-12314d70d789 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.316887905Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=cc2e7227-fcb5-4e0b-b647-12314d70d789 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.316926034Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=cc2e7227-fcb5-4e0b-b647-12314d70d789 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.360744336Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=0ea1486a-0d89-4a21-929d-c4be9efea554 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.360877589Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=0ea1486a-0d89-4a21-929d-c4be9efea554 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.360926697Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=0ea1486a-0d89-4a21-929d-c4be9efea554 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.388284672Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5c436c91-3a6e-45fc-b84a-f452aff3ecbe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.388443919Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5c436c91-3a6e-45fc-b84a-f452aff3ecbe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.388496293Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5c436c91-3a6e-45fc-b84a-f452aff3ecbe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:34:23 functional-253997 crio[6019]: time="2025-12-10T06:34:23.911412131Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f800cde8-651a-4555-a685-1c738a5e3283 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:34:27.966745   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:27.967391   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:27.969006   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:27.969593   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:34:27.971370   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
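	Every kubectl attempt above fails at the TCP layer ("dial tcp [::1]:8441: connect: connection refused"), i.e. nothing is listening on the apiserver port at all. A quick standard-library sketch of the same check, purely illustrative; the address is the node endpoint from this log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the apiserver port directly; with the control plane down this
		// fails immediately with "connection refused", matching the log above.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("port 8441 is accepting connections")
	}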
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 06:15] overlayfs: idmapped layers are currently not supported
	[Dec10 06:16] overlayfs: idmapped layers are currently not supported
	[Dec10 06:19] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:34:28 up  3:16,  0 user,  load average: 0.34, 0.30, 0.81
	Linux functional-253997 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:34:25 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:34:26 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1158.
	Dec 10 06:34:26 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:26 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:26 functional-253997 kubelet[10034]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:26 functional-253997 kubelet[10034]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:26 functional-253997 kubelet[10034]: E1210 06:34:26.199665   10034 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:34:26 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:34:26 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:34:26 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1159.
	Dec 10 06:34:26 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:26 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:26 functional-253997 kubelet[10056]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:26 functional-253997 kubelet[10056]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:26 functional-253997 kubelet[10056]: E1210 06:34:26.926971   10056 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:34:26 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:34:26 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:34:27 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1160.
	Dec 10 06:34:27 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:27 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:34:27 functional-253997 kubelet[10082]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:27 functional-253997 kubelet[10082]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:34:27 functional-253997 kubelet[10082]: E1210 06:34:27.694169   10082 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:34:27 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:34:27 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 2 (359.067921ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.53s)
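	The kubelet journal above shows the underlying failure: every systemd restart (counters 1158-1160) exits validation with "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes up and all the connection-refused errors follow. The SystemVerification warning later in this report names the documented escape hatch: setting the KubeletConfiguration option FailCgroupV1 to false. A sketch of that override, assuming the lowerCamel field spelling used by kubelet.config.k8s.io/v1beta1 (verify against the kubelet version in use):

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Assumption: field spelling per KEP-4569. This explicitly re-enables running
	# on a cgroup v1 host for kubelet v1.35+; cgroup v1 support remains deprecated.
	failCgroupV1: false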

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (737.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-253997 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 06:36:58.799042  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:38:38.176277  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:40:01.245437  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:41:58.799188  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:43:38.183464  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-253997 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m14.923401151s)

                                                
                                                
-- stdout --
	* [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000253721s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
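The cgroups v1 warning in the stderr above says that for kubelet v1.35 or newer on such a host, the KubeletConfiguration option FailCgroupV1 must be set to false, and the [patches] line shows kubeadm already applying a strategic-merge patch to the "kubeletconfiguration" target. As a rough sketch only (the directory is hypothetical, and the field name failCgroupV1 plus the patch-file naming convention are assumptions about the standard kubeadm --patches mechanism, not something this log shows minikube doing), such a patch could be supplied by hand like this:

    mkdir -p /tmp/kubeadm-patches
    cat <<'EOF' >/tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF
    # then: kubeadm init --patches /tmp/kubeadm-patches ...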
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
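Written out against this profile, the log-collection command the box asks for would be (profile name taken from this run):

    out/minikube-linux-arm64 logs -p functional-253997 --file=logs.txt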
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[duplicate of the kubeadm init stdout/stderr from the "Error starting cluster" block above, omitted]
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
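	The suggestion above, spelled out as a full retry command for this profile (the flag is exactly as minikube prints it; whether it helps depends on what the kubelet journal actually shows):

    out/minikube-linux-arm64 start -p functional-253997 --extra-config=kubelet.cgroup-driver=systemd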

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-253997 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m14.924708907s for "functional-253997" cluster.
I1210 06:46:44.053965  364265 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-253997
helpers_test.go:244: (dbg) docker inspect functional-253997:

-- stdout --
	[
	    {
	        "Id": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	        "Created": "2025-12-10T06:19:33.832297734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:33.914699563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hosts",
	        "LogPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7-json.log",
	        "Name": "/functional-253997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-253997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-253997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	                "LowerDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-253997",
	                "Source": "/var/lib/docker/volumes/functional-253997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253997",
	                "name.minikube.sigs.k8s.io": "functional-253997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484c42edbd556e17e2c5a836571d3275bc9cbd157b70c100e08a5b73322a62e6",
	            "SandboxKey": "/var/run/docker/netns/484c42edbd55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ba:52:c5:6e:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fc5aadc020d5981cfdab05726de722ae81d653cbc30b66014568069657370fe7",
	                    "EndpointID": "9ecd4882ea5c9fe5581b68ad15a0a2f450e34777aee426fcc158008c81076e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253997",
	                        "256e059cdf1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997: exit status 2 (323.458029ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-253997 logs -n 25: (1.073461409s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-013831 image build -t localhost/my-image:functional-013831 testdata/build --alsologtostderr                                          │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format table --alsologtostderr                                                                                     │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls                                                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ delete         │ -p functional-013831                                                                                                                            │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start          │ -p functional-253997 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ start          │ -p functional-253997 --alsologtostderr -v=8                                                                                                     │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:28 UTC │                     │
	│ cache          │ functional-253997 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache add registry.k8s.io/pause:latest                                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache add minikube-local-cache-test:functional-253997                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache delete minikube-local-cache-test:functional-253997                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl images                                                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │                     │
	│ cache          │ functional-253997 cache reload                                                                                                                  │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ kubectl        │ functional-253997 kubectl -- --context functional-253997 get pods                                                                               │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │                     │
	│ start          │ -p functional-253997 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:34:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:34:29.186876  407330 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:34:29.187053  407330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:34:29.187058  407330 out.go:374] Setting ErrFile to fd 2...
	I1210 06:34:29.187062  407330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:34:29.187341  407330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:34:29.187713  407330 out.go:368] Setting JSON to false
	I1210 06:34:29.188576  407330 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11822,"bootTime":1765336648,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:34:29.188634  407330 start.go:143] virtualization:  
	I1210 06:34:29.192149  407330 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:34:29.195073  407330 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:34:29.195162  407330 notify.go:221] Checking for updates...
	I1210 06:34:29.200831  407330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:34:29.203909  407330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:34:29.206776  407330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:34:29.209617  407330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:34:29.212440  407330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:34:29.215839  407330 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:34:29.215937  407330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:34:29.239404  407330 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:34:29.239516  407330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:34:29.302303  407330 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:34:29.292878865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:34:29.302405  407330 docker.go:319] overlay module found
	I1210 06:34:29.305588  407330 out.go:179] * Using the docker driver based on existing profile
	I1210 06:34:29.308369  407330 start.go:309] selected driver: docker
	I1210 06:34:29.308379  407330 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:34:29.308484  407330 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:34:29.308590  407330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:34:29.367055  407330 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:34:29.35802689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:34:29.367451  407330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:34:29.367476  407330 cni.go:84] Creating CNI manager for ""
	I1210 06:34:29.367527  407330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:34:29.367575  407330 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:34:29.370834  407330 out.go:179] * Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	I1210 06:34:29.373779  407330 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:34:29.376601  407330 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:34:29.379406  407330 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:34:29.379504  407330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:34:29.398798  407330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:34:29.398809  407330 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:34:29.439425  407330 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 06:34:29.641198  407330 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
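	Both preload URLs above return 404, which is expected for a release-candidate Kubernetes build, so minikube falls back to caching the images individually (the cache.go lines further down). A quick availability check against the first URL, copied verbatim from the warning:

    curl -fsIL https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 \
      >/dev/null && echo "preload available" || echo "no preload for this version"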
	I1210 06:34:29.641344  407330 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/config.json ...
	I1210 06:34:29.641548  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
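	The binary.go line above shows minikube fetching kubeadm straight from dl.k8s.io and verifying it against the published .sha256 file instead of caching it locally. A hand-run equivalent of that fetch-and-verify step, with both URLs taken verbatim from the log line:

    curl -fsSLo kubeadm https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm
    echo "$(curl -fsSL https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256)  kubeadm" | sha256sum --check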
	I1210 06:34:29.641601  407330 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:34:29.641630  407330 start.go:360] acquireMachinesLock for functional-253997: {Name:mkd4a204596bf14d7530a6c5c103756527acdf26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:29.641675  407330 start.go:364] duration metric: took 26.355µs to acquireMachinesLock for "functional-253997"
	I1210 06:34:29.641688  407330 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:34:29.641692  407330 fix.go:54] fixHost starting: 
	I1210 06:34:29.641950  407330 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:34:29.660018  407330 fix.go:112] recreateIfNeeded on functional-253997: state=Running err=<nil>
	W1210 06:34:29.660039  407330 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:34:29.663260  407330 out.go:252] * Updating the running docker "functional-253997" container ...
	I1210 06:34:29.663287  407330 machine.go:94] provisionDockerMachine start ...
	I1210 06:34:29.663366  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:29.683378  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:29.683692  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:29.683698  407330 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:34:29.821832  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:29.837224  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:34:29.837239  407330 ubuntu.go:182] provisioning hostname "functional-253997"
	I1210 06:34:29.837320  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:29.868971  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:29.869301  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:29.869310  407330 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-253997 && echo "functional-253997" | sudo tee /etc/hostname
	I1210 06:34:29.986840  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:30.112009  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:34:30.112104  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:30.132596  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:30.132908  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:30.132923  407330 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-253997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-253997/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-253997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:34:30.208840  407330 cache.go:107] acquiring lock: {Name:mk9268471d41d450748d4b0133d1a043378776f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.208835  407330 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.208914  407330 cache.go:107] acquiring lock: {Name:mk8250dc655f821bf2674ed77a7683798ede4f4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.208957  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:34:30.208967  407330 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 138.989µs
	I1210 06:34:30.208975  407330 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:34:30.208986  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:34:30.209001  407330 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 97.733µs
	I1210 06:34:30.208999  407330 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209007  407330 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:34:30.209031  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:34:30.209036  407330 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.599µs
	I1210 06:34:30.209024  407330 cache.go:107] acquiring lock: {Name:mk1c0334e51772e4f6b3429cc05b91ec19d54b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209041  407330 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:34:30.209051  407330 cache.go:107] acquiring lock: {Name:mkc2831727d74e5fadb1c00bd89f0236b385200e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209067  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:34:30.209072  407330 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 53.268µs
	I1210 06:34:30.209089  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:34:30.209088  407330 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:34:30.209095  407330 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 45.753µs
	I1210 06:34:30.209100  407330 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:34:30.209108  407330 cache.go:107] acquiring lock: {Name:mk7e052900e41419dc78d3311f27db508a8f64b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209102  407330 cache.go:107] acquiring lock: {Name:mk41aaba55a3296a937cf380d44dc9920df69546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209134  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:34:30.209138  407330 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 31.27µs
	I1210 06:34:30.209143  407330 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:34:30.209145  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:34:30.209151  407330 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 50.536µs
	I1210 06:34:30.209155  407330 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:34:30.209160  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:34:30.209163  407330 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 351.676µs
	I1210 06:34:30.209168  407330 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:34:30.209180  407330 cache.go:87] Successfully saved all images to host disk.
	I1210 06:34:30.290041  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:34:30.290057  407330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 06:34:30.290077  407330 ubuntu.go:190] setting up certificates
	I1210 06:34:30.290086  407330 provision.go:84] configureAuth start
	I1210 06:34:30.290163  407330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:34:30.308042  407330 provision.go:143] copyHostCerts
	I1210 06:34:30.308132  407330 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 06:34:30.308140  407330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:34:30.308215  407330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 06:34:30.308356  407330 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 06:34:30.308366  407330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:34:30.308393  407330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 06:34:30.308451  407330 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 06:34:30.308454  407330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:34:30.308477  407330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 06:34:30.308526  407330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.functional-253997 san=[127.0.0.1 192.168.49.2 functional-253997 localhost minikube]
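configureAuth regenerates a server certificate whose SANs cover every name the machine may be addressed by (127.0.0.1, 192.168.49.2, functional-253997, localhost, minikube). A rough hand-rolled equivalent with openssl, using hypothetical file names alongside the CA files named above:
	# Sketch of the server-cert step: issue a cert for the same SAN set,
	# signed by the minikube CA (ca.pem / ca-key.pem as logged above)
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.functional-253997"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-253997,DNS:localhost,DNS:minikube')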
	I1210 06:34:30.594902  407330 provision.go:177] copyRemoteCerts
	I1210 06:34:30.594965  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:34:30.595003  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:30.611740  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:30.721082  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:34:30.738821  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:34:30.756666  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:34:30.774292  407330 provision.go:87] duration metric: took 484.176925ms to configureAuth
	I1210 06:34:30.774310  407330 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:34:30.774512  407330 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:34:30.774629  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:30.792842  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:30.793168  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:30.793179  407330 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:34:31.164456  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:34:31.164470  407330 machine.go:97] duration metric: took 1.501175708s to provisionDockerMachine
	I1210 06:34:31.164497  407330 start.go:293] postStartSetup for "functional-253997" (driver="docker")
	I1210 06:34:31.164510  407330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:34:31.164571  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:34:31.164607  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.185147  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.293395  407330 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:34:31.296969  407330 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:34:31.296987  407330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:34:31.296998  407330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 06:34:31.297053  407330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 06:34:31.297133  407330 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 06:34:31.297238  407330 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> hosts in /etc/test/nested/copy/364265
	I1210 06:34:31.297285  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364265
	I1210 06:34:31.305181  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:34:31.324368  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts --> /etc/test/nested/copy/364265/hosts (40 bytes)
	I1210 06:34:31.342686  407330 start.go:296] duration metric: took 178.173087ms for postStartSetup
	I1210 06:34:31.342778  407330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:34:31.342817  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.360907  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.462708  407330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:34:31.467744  407330 fix.go:56] duration metric: took 1.826044535s for fixHost
	I1210 06:34:31.467760  407330 start.go:83] releasing machines lock for "functional-253997", held for 1.826077816s
	I1210 06:34:31.467840  407330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:34:31.485284  407330 ssh_runner.go:195] Run: cat /version.json
	I1210 06:34:31.485341  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.485360  407330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:34:31.485410  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.504331  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.505583  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.702850  407330 ssh_runner.go:195] Run: systemctl --version
	I1210 06:34:31.710100  407330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:34:31.751135  407330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:34:31.755552  407330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:34:31.755612  407330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:34:31.763681  407330 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
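Note that the disable command above is logged with its globs and parentheses unquoted; pasted into a shell verbatim it would be mangled before find ever saw it. A paste-safe rendering of the same step (find already runs under sudo, so mv needs no second sudo):
	# Same bridge/podman CNI disable, with find's operators quoted for the shell
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;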
	I1210 06:34:31.763695  407330 start.go:496] detecting cgroup driver to use...
	I1210 06:34:31.763726  407330 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:34:31.763773  407330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:34:31.779177  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:34:31.792657  407330 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:34:31.792726  407330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:34:31.808481  407330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:34:31.821835  407330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:34:31.953412  407330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:34:32.070663  407330 docker.go:234] disabling docker service ...
	I1210 06:34:32.070719  407330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:34:32.089582  407330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:34:32.103903  407330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:34:32.229247  407330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:34:32.354550  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:34:32.368208  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:34:32.383037  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:32.544686  407330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:34:32.544766  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.554538  407330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:34:32.554607  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.563600  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.572445  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.581785  407330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:34:32.589992  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.599257  407330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.607809  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.616790  407330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:34:32.624404  407330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
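Composed together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly the values sketched in the comments below; this is a reconstruction from the substitutions in the log, not a dump of the real file:
	# Inspect the rewritten drop-in; after this run it should contain lines
	# equivalent to:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]
	sudo cat /etc/crio/crio.conf.d/02-crio.conf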
	I1210 06:34:32.631884  407330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:34:32.742959  407330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:34:32.924926  407330 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:34:32.925015  407330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:34:32.931953  407330 start.go:564] Will wait 60s for crictl version
	I1210 06:34:32.932037  407330 ssh_runner.go:195] Run: which crictl
	I1210 06:34:32.936975  407330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:34:32.972701  407330 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:34:32.972786  407330 ssh_runner.go:195] Run: crio --version
	I1210 06:34:33.008288  407330 ssh_runner.go:195] Run: crio --version
	I1210 06:34:33.045101  407330 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:34:33.048270  407330 cli_runner.go:164] Run: docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:34:33.065511  407330 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:34:33.072736  407330 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 06:34:33.075695  407330 kubeadm.go:884] updating cluster {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:34:33.075981  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:33.225944  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:33.376252  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:33.530247  407330 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:34:33.530325  407330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:34:33.568941  407330 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:34:33.568954  407330 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:34:33.568960  407330 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1210 06:34:33.569060  407330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-253997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
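The generated 10-kubeadm.conf above uses systemd's standard override idiom: the bare ExecStart= first clears the ExecStart list inherited from kubelet.service, and the following ExecStart= installs the replacement; without the blank assignment, systemd would reject a second ExecStart for a non-oneshot service. The merged result can be checked with:
	# Show kubelet.service with all drop-ins applied, in order
	systemctl cat kubelet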
	I1210 06:34:33.569145  407330 ssh_runner.go:195] Run: crio config
	I1210 06:34:33.643186  407330 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 06:34:33.643211  407330 cni.go:84] Creating CNI manager for ""
	I1210 06:34:33.643224  407330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:34:33.643242  407330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:34:33.643280  407330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-253997 NodeName:functional-253997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:34:33.643429  407330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-253997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
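Before the phases further down replay this file, it can be sanity-checked offline; kubeadm (v1.26 and later) ships a validator, so a sketch using the binary path and staging file from this run would be:
	# Validate the rendered kubeadm/kubelet/kube-proxy documents
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new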
	
	I1210 06:34:33.643524  407330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:34:33.653419  407330 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:34:33.653495  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:34:33.663141  407330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:34:33.678587  407330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:34:33.693949  407330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1210 06:34:33.710464  407330 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:34:33.714723  407330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:34:33.827439  407330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:34:34.376520  407330 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997 for IP: 192.168.49.2
	I1210 06:34:34.376531  407330 certs.go:195] generating shared ca certs ...
	I1210 06:34:34.376561  407330 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:34:34.376695  407330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 06:34:34.376739  407330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 06:34:34.376746  407330 certs.go:257] generating profile certs ...
	I1210 06:34:34.376830  407330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key
	I1210 06:34:34.376883  407330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423
	I1210 06:34:34.376918  407330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key
	I1210 06:34:34.377046  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 06:34:34.377076  407330 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 06:34:34.377083  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:34:34.377112  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:34:34.377138  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:34:34.377165  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 06:34:34.377235  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:34:34.377907  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:34:34.400957  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:34:34.422626  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:34:34.444886  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:34:34.463194  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:34:34.485380  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:34:34.504994  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:34:34.523903  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:34:34.542693  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 06:34:34.560781  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 06:34:34.580039  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:34:34.598952  407330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:34:34.612103  407330 ssh_runner.go:195] Run: openssl version
	I1210 06:34:34.618607  407330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.626715  407330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 06:34:34.634462  407330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.638500  407330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.638572  407330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.680023  407330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:34:34.687891  407330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.695733  407330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:34:34.704338  407330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.708573  407330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.708632  407330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.750214  407330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:34:34.758402  407330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.766563  407330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 06:34:34.774837  407330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.779114  407330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.779177  407330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.821136  407330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
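The test/ln/openssl sequence above implements OpenSSL's hashed CA directory layout: each trusted cert must be reachable as /etc/ssl/certs/<subject-hash>.0, where the hash is what `openssl x509 -hash` prints (b5213941 for minikubeCA in this run). By hand, the same link would be made with:
	# Compute the subject hash and create the lookup symlink OpenSSL expects
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"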
	I1210 06:34:34.829270  407330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:34:34.833529  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:34:34.876277  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:34:34.917707  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:34:34.959457  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:34:35.001865  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:34:35.044914  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
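Each -checkend 86400 call above exits non-zero if the certificate will have expired 86400 seconds (24 hours) from now, which is the signal to regenerate it; for example:
	# Exit 0 while the cert stays valid for the next 24h, 1 otherwise
	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo 'valid for 24h' || echo 'expiring within 24h'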
	I1210 06:34:35.086921  407330 kubeadm.go:401] StartCluster: {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:34:35.087016  407330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:34:35.087089  407330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:34:35.117459  407330 cri.go:89] found id: ""
	I1210 06:34:35.117522  407330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:34:35.127607  407330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:34:35.127629  407330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:34:35.127685  407330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:34:35.136902  407330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.137526  407330 kubeconfig.go:125] found "functional-253997" server: "https://192.168.49.2:8441"
	I1210 06:34:35.138779  407330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:34:35.148051  407330 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 06:19:55.285285887 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:34:33.703709051 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
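Drift detection is a plain unified diff against the freshly rendered config: diff exits 0 when the files match, 1 on differences (the drift case above), and greater than 1 on error. A minimal sketch of the check:
	# Exit status 1 from diff is the "config drift" signal
	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  echo 'kubeadm config drift: cluster will be reconfigured'
	fi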
	I1210 06:34:35.148070  407330 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:34:35.148082  407330 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 06:34:35.148140  407330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:34:35.178671  407330 cri.go:89] found id: ""
	I1210 06:34:35.178737  407330 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:34:35.196838  407330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:34:35.205412  407330 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 10 06:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 10 06:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 06:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 10 06:24 /etc/kubernetes/scheduler.conf
	
	I1210 06:34:35.205484  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:34:35.213947  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:34:35.222529  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.222599  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:34:35.230587  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:34:35.239174  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.239260  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:34:35.247436  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:34:35.255726  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.255785  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:34:35.264394  407330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:34:35.273245  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:35.319550  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.241705  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.453815  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.521107  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
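Rather than a full `kubeadm init`, the restart path replays individual phases against the same config: certs, kubeconfig, kubelet-start, control-plane, and etcd, in that order. Condensed into a loop (paths as logged; a sketch, not minikube's own code):
	K=/var/lib/minikube/binaries/v1.35.0-rc.1
	for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
	  # word-splitting on $phase is intentional: it expands to subcommand + scope
	  sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done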
	I1210 06:34:36.566051  407330 api_server.go:52] waiting for apiserver process to appear ...
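The wait below polls with pgrep: -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest match; exit status 1 (no match) keeps the loop going. One iteration looks like:
	# Prints a PID and exits 0 only once a matching kube-apiserver process exists
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'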
	I1210 06:34:36.566126  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same "sudo pgrep -xnf kube-apiserver.*minikube.*" check repeats at roughly 500ms intervals from 06:34:37 through 06:35:35 without ever finding an apiserver process ...]
	I1210 06:35:36.066318  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:36.566330  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:36.566414  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:36.592227  407330 cri.go:89] found id: ""
	I1210 06:35:36.592241  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.592248  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:36.592253  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:36.592312  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:36.622028  407330 cri.go:89] found id: ""
	I1210 06:35:36.622043  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.622051  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:36.622056  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:36.622116  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:36.648208  407330 cri.go:89] found id: ""
	I1210 06:35:36.648226  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.648234  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:36.648240  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:36.648298  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:36.674377  407330 cri.go:89] found id: ""
	I1210 06:35:36.674397  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.674405  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:36.674410  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:36.674471  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:36.699772  407330 cri.go:89] found id: ""
	I1210 06:35:36.699787  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.699794  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:36.699801  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:36.699864  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:36.724815  407330 cri.go:89] found id: ""
	I1210 06:35:36.724830  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.724838  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:36.724843  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:36.724900  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:36.750775  407330 cri.go:89] found id: ""
	I1210 06:35:36.750791  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.750798  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:36.750806  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:36.750820  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:36.820446  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:36.820465  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:36.835955  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:36.835970  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:36.903411  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:36.895055   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.895887   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.897562   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.898101   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.899639   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:36.895055   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.895887   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.897562   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.898101   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.899639   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:36.903424  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:36.903435  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:36.979747  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:36.979768  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
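
The cycle above is minikube probing for each control-plane container by name: cri.go runs "sudo crictl ps -a --quiet --name=<component>" for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager and kindnet, and an empty ID list is what produces each "No container was found matching ..." warning. A minimal sketch of that check in Go follows; running the command locally instead of through minikube's ssh_runner, and the hand-written component list, are simplifications for illustration only.

    // Probe for control-plane containers by name; empty crictl output
    // means "no container found", as in the warnings above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out)) // one container ID per line
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: found %d container(s)\n", name, len(ids))
        }
    }

Since the apiserver never came up in this run, every probe returns an empty list and the loop falls through to log gathering each time.
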
	I1210 06:35:39.514581  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:39.524909  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:39.524970  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:39.550102  407330 cri.go:89] found id: ""
	I1210 06:35:39.550116  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.550124  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:39.550129  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:39.550187  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:39.576588  407330 cri.go:89] found id: ""
	I1210 06:35:39.576602  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.576619  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:39.576624  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:39.576690  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:39.603288  407330 cri.go:89] found id: ""
	I1210 06:35:39.603303  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.603310  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:39.603315  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:39.603373  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:39.632338  407330 cri.go:89] found id: ""
	I1210 06:35:39.632353  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.632360  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:39.632365  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:39.632420  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:39.657752  407330 cri.go:89] found id: ""
	I1210 06:35:39.657767  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.657773  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:39.657779  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:39.657844  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:39.683212  407330 cri.go:89] found id: ""
	I1210 06:35:39.683226  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.683234  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:39.683240  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:39.683300  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:39.708413  407330 cri.go:89] found id: ""
	I1210 06:35:39.708437  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.708445  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:39.708453  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:39.708464  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:39.775637  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:39.775659  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:39.791086  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:39.791102  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:39.857652  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:39.849379   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.849925   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.851713   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.852318   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.853965   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:39.849379   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.849925   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.851713   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.852318   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.853965   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:39.857663  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:39.857675  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:39.935547  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:39.935569  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
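
The "container status" step uses a shell fallback chain: the backtick substitution resolves crictl's absolute path when "which" finds it (otherwise leaving the bare name to PATH lookup), and the trailing "|| sudo docker ps -a" covers hosts where the crictl invocation fails outright. A small Go sketch of the same one-liner, with the command string copied verbatim from the log:

    // Prefer crictl (resolving its path via `which` when possible) and
    // fall back to docker if the crictl invocation fails.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("container status failed: %v\n", err)
        }
        fmt.Print(string(out))
    }
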
	I1210 06:35:42.469375  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:42.480182  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:42.480240  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:42.506760  407330 cri.go:89] found id: ""
	I1210 06:35:42.506774  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.506781  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:42.506786  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:42.506843  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:42.536234  407330 cri.go:89] found id: ""
	I1210 06:35:42.536249  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.536256  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:42.536261  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:42.536329  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:42.566988  407330 cri.go:89] found id: ""
	I1210 06:35:42.567003  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.567010  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:42.567015  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:42.567076  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:42.592607  407330 cri.go:89] found id: ""
	I1210 06:35:42.592630  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.592638  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:42.592643  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:42.592709  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:42.617649  407330 cri.go:89] found id: ""
	I1210 06:35:42.617664  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.617671  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:42.617676  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:42.617734  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:42.643410  407330 cri.go:89] found id: ""
	I1210 06:35:42.643425  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.643432  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:42.643437  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:42.643503  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:42.669531  407330 cri.go:89] found id: ""
	I1210 06:35:42.669546  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.669553  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:42.669561  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:42.669571  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:42.735924  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:42.735944  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:42.751205  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:42.751229  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:42.816158  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:42.807932   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.808831   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810433   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810980   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.812516   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:42.807932   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.808831   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810433   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810980   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.812516   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:42.816169  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:42.816179  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:42.893021  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:42.893042  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
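
Each fruitless cycle re-collects the same three diagnostics: the last 400 journal lines for the kubelet and crio units, plus warn-and-above kernel messages from dmesg (-P no pager, -H human-readable, -L=never to disable color). A sketch of those gatherers, with the command strings copied from the log; running them locally instead of over SSH is the only simplification:

    // Run the three gatherers each idle cycle executes, in log order.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        gatherers := [][2]string{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
        }
        for _, g := range gatherers {
            fmt.Printf("==> %s <==\n", g[0])
            out, err := exec.Command("/bin/bash", "-c", g[1]).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s logs failed: %v\n", g[0], err)
            }
            fmt.Print(string(out))
        }
    }
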
	I1210 06:35:45.426224  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:45.438079  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:45.438148  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:45.472267  407330 cri.go:89] found id: ""
	I1210 06:35:45.472291  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.472299  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:45.472306  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:45.472384  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:45.502901  407330 cri.go:89] found id: ""
	I1210 06:35:45.502931  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.502939  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:45.502945  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:45.503008  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:45.529442  407330 cri.go:89] found id: ""
	I1210 06:35:45.529458  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.529465  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:45.529470  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:45.529534  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:45.555125  407330 cri.go:89] found id: ""
	I1210 06:35:45.555139  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.555159  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:45.555165  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:45.555243  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:45.580961  407330 cri.go:89] found id: ""
	I1210 06:35:45.580976  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.580994  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:45.580999  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:45.581057  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:45.610965  407330 cri.go:89] found id: ""
	I1210 06:35:45.610980  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.610987  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:45.610993  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:45.611059  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:45.637091  407330 cri.go:89] found id: ""
	I1210 06:35:45.637105  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.637120  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:45.637128  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:45.637137  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:45.715413  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:45.715435  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:45.749154  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:45.749171  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:45.815517  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:45.815543  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:45.831429  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:45.831446  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:45.906374  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:45.898629   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.899395   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.900929   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.901506   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.902971   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:45.898629   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.899395   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.900929   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.901506   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.902971   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
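
Every "describe nodes" attempt dies the same way: "dial tcp [::1]:8441: connect: connection refused", meaning nothing is accepting TCP connections on port 8441, where this cluster's apiserver is expected to listen — consistent with crictl finding no kube-apiserver container at all. A quick dial probe reproduces the condition; the address and timeout below simply match the log:

    // Probe the port kubectl keeps failing to reach; a "connection
    // refused" from this dial is the same condition behind the
    // memcache.go errors above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Printf("apiserver port not reachable: %v\n", err) // expected while it is down
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }
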
	I1210 06:35:48.406578  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:48.421255  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:48.421324  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:48.447131  407330 cri.go:89] found id: ""
	I1210 06:35:48.447146  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.447153  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:48.447159  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:48.447220  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:48.473099  407330 cri.go:89] found id: ""
	I1210 06:35:48.473122  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.473129  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:48.473134  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:48.473222  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:48.498597  407330 cri.go:89] found id: ""
	I1210 06:35:48.498612  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.498619  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:48.498624  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:48.498681  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:48.523362  407330 cri.go:89] found id: ""
	I1210 06:35:48.523377  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.523384  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:48.523389  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:48.523453  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:48.551807  407330 cri.go:89] found id: ""
	I1210 06:35:48.551821  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.551835  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:48.551840  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:48.551900  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:48.581473  407330 cri.go:89] found id: ""
	I1210 06:35:48.581487  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.581502  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:48.581509  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:48.581565  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:48.607499  407330 cri.go:89] found id: ""
	I1210 06:35:48.607514  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.607521  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:48.607529  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:48.607539  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:48.673753  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:48.673774  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:48.688837  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:48.688853  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:48.751707  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:48.744029   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.744581   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746111   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746590   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.748085   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:48.744029   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.744581   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746111   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746590   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.748085   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:48.751717  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:48.751727  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:48.828663  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:48.828686  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
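
The timestamps show the poll cadence: the pgrep probe fires at 06:35:36, :39, :42, :45, :48 — roughly every three seconds, with a full diagnostics pass between attempts. A stripped-down version of that loop is sketched below; the 3s interval is inferred from the timestamps, while the 2-minute deadline is an assumption for illustration, not minikube's real timeout.

    // Poll for the apiserver process; while it is absent, sleep and
    // retry until a deadline expires.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run()
            if err == nil { // pgrep exits 0 only when a match exists
                fmt.Println("kube-apiserver process found")
                return
            }
            // In the real loop this is where the diagnostics above are gathered.
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }
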
	I1210 06:35:51.363003  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:51.376217  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:51.376312  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:51.407718  407330 cri.go:89] found id: ""
	I1210 06:35:51.407732  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.407755  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:51.407762  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:51.407874  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:51.444235  407330 cri.go:89] found id: ""
	I1210 06:35:51.444269  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.444286  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:51.444295  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:51.444379  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:51.474869  407330 cri.go:89] found id: ""
	I1210 06:35:51.474883  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.474890  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:51.474895  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:51.474953  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:51.504739  407330 cri.go:89] found id: ""
	I1210 06:35:51.504764  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.504772  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:51.504777  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:51.504846  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:51.532353  407330 cri.go:89] found id: ""
	I1210 06:35:51.532368  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.532375  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:51.532380  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:51.532455  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:51.557565  407330 cri.go:89] found id: ""
	I1210 06:35:51.557579  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.557586  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:51.557591  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:51.557661  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:51.583285  407330 cri.go:89] found id: ""
	I1210 06:35:51.583300  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.583307  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:51.583315  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:51.583325  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:51.613387  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:51.613404  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:51.680028  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:51.680049  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:51.695935  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:51.695952  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:51.759280  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:51.750909   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.751756   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753321   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753933   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.755583   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:51.750909   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.751756   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753321   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753933   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.755583   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:51.759290  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:51.759301  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:54.338519  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:54.348725  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:54.348780  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:54.383598  407330 cri.go:89] found id: ""
	I1210 06:35:54.383626  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.383634  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:54.383639  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:54.383707  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:54.410152  407330 cri.go:89] found id: ""
	I1210 06:35:54.410180  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.410187  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:54.410192  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:54.410264  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:54.438326  407330 cri.go:89] found id: ""
	I1210 06:35:54.438352  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.438360  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:54.438365  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:54.438441  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:54.465850  407330 cri.go:89] found id: ""
	I1210 06:35:54.465864  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.465871  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:54.465876  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:54.465931  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:54.491709  407330 cri.go:89] found id: ""
	I1210 06:35:54.491722  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.491729  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:54.491734  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:54.491790  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:54.523425  407330 cri.go:89] found id: ""
	I1210 06:35:54.523440  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.523447  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:54.523452  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:54.523548  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:54.550380  407330 cri.go:89] found id: ""
	I1210 06:35:54.550394  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.550411  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:54.550438  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:54.550449  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:54.582306  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:54.582324  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:54.647908  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:54.647927  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:54.663750  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:54.663772  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:54.730309  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:54.719958   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.720734   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.722315   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.724777   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.725551   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:54.719958   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.720734   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.722315   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.724777   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.725551   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:54.730320  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:54.730331  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:57.308665  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:57.320319  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:57.320392  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:57.345562  407330 cri.go:89] found id: ""
	I1210 06:35:57.345577  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.345584  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:57.345589  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:57.345647  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:57.371859  407330 cri.go:89] found id: ""
	I1210 06:35:57.371874  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.371897  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:57.371903  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:57.371970  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:57.406362  407330 cri.go:89] found id: ""
	I1210 06:35:57.406377  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.406384  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:57.406389  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:57.406463  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:57.436087  407330 cri.go:89] found id: ""
	I1210 06:35:57.436103  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.436110  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:57.436116  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:57.436187  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:57.465764  407330 cri.go:89] found id: ""
	I1210 06:35:57.465779  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.465786  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:57.465791  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:57.465867  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:57.494039  407330 cri.go:89] found id: ""
	I1210 06:35:57.494065  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.494073  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:57.494078  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:57.494145  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:57.520097  407330 cri.go:89] found id: ""
	I1210 06:35:57.520123  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.520131  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:57.520140  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:57.520151  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:57.586496  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:57.586517  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:57.602111  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:57.602128  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:57.668344  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:57.660356   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.660757   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.662627   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.663077   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.664745   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:57.660356   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.660757   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.662627   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.663077   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.664745   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:57.668356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:57.668367  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:57.746160  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:57.746183  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:00.275712  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:00.321874  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:00.321955  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:00.384327  407330 cri.go:89] found id: ""
	I1210 06:36:00.384343  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.384351  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:00.384357  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:00.384451  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:00.459817  407330 cri.go:89] found id: ""
	I1210 06:36:00.459834  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.459842  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:00.459848  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:00.459916  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:00.497674  407330 cri.go:89] found id: ""
	I1210 06:36:00.497690  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.497698  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:00.497704  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:00.497774  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:00.541499  407330 cri.go:89] found id: ""
	I1210 06:36:00.541516  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.541525  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:00.541531  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:00.541613  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:00.581412  407330 cri.go:89] found id: ""
	I1210 06:36:00.581436  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.581463  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:00.581468  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:00.581541  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:00.610779  407330 cri.go:89] found id: ""
	I1210 06:36:00.610795  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.610802  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:00.610807  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:00.610870  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:00.642543  407330 cri.go:89] found id: ""
	I1210 06:36:00.642559  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.642567  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:00.642575  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:00.642586  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:00.710346  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:00.710367  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:00.725875  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:00.725894  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:00.793058  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:00.784782   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.785552   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787181   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787496   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.789654   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:00.784782   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.785552   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787181   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787496   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.789654   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:00.793071  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:00.793084  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:00.875916  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:00.875944  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
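
One more detail worth noting from the repeated failure blocks: the describe-nodes gatherer invokes the version-pinned kubectl shipped in /var/lib/minikube/binaries/v1.35.0-rc.1 against the node-local kubeconfig, and on a nonzero exit it reports the command, both output streams, and the status — producing the "failed describe nodes ... Process exited with status 1" blocks above. A sketch of that invocation, with paths taken from the log and separate stream capture as an assumed detail:

    // Run the version-pinned kubectl with the node-local kubeconfig,
    // keeping stdout and stderr separate so a failure can be reported
    // with both streams.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl" // path from the log
        cmd := exec.Command("sudo", kubectl, "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
                err, stdout.String(), stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }
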
	I1210 06:36:03.406417  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:03.419044  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:03.419120  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:03.447628  407330 cri.go:89] found id: ""
	I1210 06:36:03.447658  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.447666  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:03.447671  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:03.447737  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:03.474253  407330 cri.go:89] found id: ""
	I1210 06:36:03.474266  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.474274  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:03.474279  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:03.474336  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:03.500678  407330 cri.go:89] found id: ""
	I1210 06:36:03.500694  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.500701  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:03.500707  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:03.500768  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:03.528282  407330 cri.go:89] found id: ""
	I1210 06:36:03.528298  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.528306  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:03.528311  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:03.528373  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:03.556656  407330 cri.go:89] found id: ""
	I1210 06:36:03.556670  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.556678  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:03.556683  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:03.556743  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:03.583735  407330 cri.go:89] found id: ""
	I1210 06:36:03.583750  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.583758  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:03.583763  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:03.583819  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:03.609076  407330 cri.go:89] found id: ""
	I1210 06:36:03.609090  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.609097  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:03.609105  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:03.609115  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:03.686817  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:03.686837  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:03.716372  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:03.716389  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:03.784121  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:03.784140  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:03.799951  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:03.799970  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:03.868350  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:03.860456   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.860967   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.862601   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.863024   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.864532   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:03.860456   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.860967   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.862601   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.863024   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.864532   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:06.369008  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:06.379783  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:06.379844  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:06.413424  407330 cri.go:89] found id: ""
	I1210 06:36:06.413438  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.413452  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:06.413457  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:06.413518  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:06.455432  407330 cri.go:89] found id: ""
	I1210 06:36:06.455446  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.455453  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:06.455458  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:06.455518  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:06.484987  407330 cri.go:89] found id: ""
	I1210 06:36:06.485002  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.485011  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:06.485016  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:06.485079  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:06.510864  407330 cri.go:89] found id: ""
	I1210 06:36:06.510879  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.510887  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:06.510892  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:06.510955  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:06.536841  407330 cri.go:89] found id: ""
	I1210 06:36:06.536856  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.536863  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:06.536868  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:06.536928  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:06.563896  407330 cri.go:89] found id: ""
	I1210 06:36:06.563911  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.563918  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:06.563923  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:06.563982  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:06.588959  407330 cri.go:89] found id: ""
	I1210 06:36:06.588973  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.588981  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:06.588988  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:06.588998  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:06.665721  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:06.665743  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:06.694509  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:06.694527  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:06.761392  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:06.761412  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:06.776431  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:06.776448  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:06.839723  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:06.830973   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.831471   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.833376   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.834107   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.835971   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:06.830973   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.831471   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.833376   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.834107   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.835971   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:09.340200  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:09.350423  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:09.350492  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:09.377180  407330 cri.go:89] found id: ""
	I1210 06:36:09.377216  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.377224  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:09.377229  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:09.377296  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:09.408780  407330 cri.go:89] found id: ""
	I1210 06:36:09.408794  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.408810  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:09.408817  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:09.408891  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:09.439014  407330 cri.go:89] found id: ""
	I1210 06:36:09.439028  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.439046  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:09.439051  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:09.439123  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:09.465550  407330 cri.go:89] found id: ""
	I1210 06:36:09.465570  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.465577  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:09.465582  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:09.465640  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:09.495077  407330 cri.go:89] found id: ""
	I1210 06:36:09.495092  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.495099  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:09.495104  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:09.495160  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:09.524259  407330 cri.go:89] found id: ""
	I1210 06:36:09.524283  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.524291  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:09.524296  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:09.524365  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:09.552397  407330 cri.go:89] found id: ""
	I1210 06:36:09.552411  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.552428  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:09.552435  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:09.552445  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:09.617989  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:09.618009  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:09.633375  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:09.633391  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:09.703345  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:09.695844   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.696518   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.697933   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.698390   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.699860   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:09.695844   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.696518   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.697933   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.698390   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.699860   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:09.703356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:09.703368  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:09.780941  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:09.780963  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:12.311981  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:12.322588  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:12.322649  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:12.348408  407330 cri.go:89] found id: ""
	I1210 06:36:12.348423  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.348430  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:12.348436  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:12.348494  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:12.381450  407330 cri.go:89] found id: ""
	I1210 06:36:12.381465  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.381492  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:12.381497  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:12.381565  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:12.421286  407330 cri.go:89] found id: ""
	I1210 06:36:12.421301  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.421309  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:12.421314  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:12.421381  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:12.453573  407330 cri.go:89] found id: ""
	I1210 06:36:12.453598  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.453605  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:12.453611  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:12.453677  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:12.480195  407330 cri.go:89] found id: ""
	I1210 06:36:12.480210  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.480218  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:12.480225  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:12.480290  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:12.505648  407330 cri.go:89] found id: ""
	I1210 06:36:12.505662  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.505669  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:12.505674  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:12.505732  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:12.532083  407330 cri.go:89] found id: ""
	I1210 06:36:12.532097  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.532104  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:12.532112  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:12.532125  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:12.598623  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:12.598646  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:12.614317  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:12.614336  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:12.686805  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:12.678148   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.678809   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.680931   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.681479   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.683127   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:12.678148   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.678809   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.680931   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.681479   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.683127   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:12.686817  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:12.686828  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:12.768698  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:12.768719  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:15.302091  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:15.312582  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:15.312644  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:15.338874  407330 cri.go:89] found id: ""
	I1210 06:36:15.338889  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.338897  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:15.338902  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:15.338962  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:15.365600  407330 cri.go:89] found id: ""
	I1210 06:36:15.365614  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.365621  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:15.365627  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:15.365687  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:15.405324  407330 cri.go:89] found id: ""
	I1210 06:36:15.405339  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.405346  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:15.405352  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:15.405411  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:15.438276  407330 cri.go:89] found id: ""
	I1210 06:36:15.438290  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.438298  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:15.438304  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:15.438362  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:15.465120  407330 cri.go:89] found id: ""
	I1210 06:36:15.465135  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.465142  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:15.465147  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:15.465226  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:15.490880  407330 cri.go:89] found id: ""
	I1210 06:36:15.490894  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.490901  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:15.490906  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:15.490968  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:15.517171  407330 cri.go:89] found id: ""
	I1210 06:36:15.517208  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.517215  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:15.517224  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:15.517235  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:15.580940  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:15.573253   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.573879   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575387   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575909   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.577385   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:15.573253   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.573879   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575387   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575909   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.577385   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:15.580950  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:15.580962  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:15.657832  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:15.657853  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:15.690721  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:15.690738  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:15.755970  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:15.755993  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:18.272507  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:18.282762  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:18.282822  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:18.312952  407330 cri.go:89] found id: ""
	I1210 06:36:18.312966  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.312980  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:18.312986  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:18.313048  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:18.340174  407330 cri.go:89] found id: ""
	I1210 06:36:18.340189  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.340196  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:18.340201  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:18.340260  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:18.365096  407330 cri.go:89] found id: ""
	I1210 06:36:18.365111  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.365118  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:18.365122  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:18.365178  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:18.408189  407330 cri.go:89] found id: ""
	I1210 06:36:18.408203  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.408210  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:18.408215  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:18.408271  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:18.439330  407330 cri.go:89] found id: ""
	I1210 06:36:18.439344  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.439351  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:18.439357  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:18.439413  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:18.471472  407330 cri.go:89] found id: ""
	I1210 06:36:18.471486  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.471493  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:18.471498  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:18.471561  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:18.499541  407330 cri.go:89] found id: ""
	I1210 06:36:18.499555  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.499562  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:18.499569  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:18.499579  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:18.566266  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:18.566288  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:18.581335  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:18.581351  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:18.649633  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:18.642053   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.642777   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644268   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644684   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.646194   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:18.642053   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.642777   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644268   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644684   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.646194   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:18.649644  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:18.649657  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:18.727427  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:18.727447  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:21.256173  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:21.266342  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:21.266401  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:21.291198  407330 cri.go:89] found id: ""
	I1210 06:36:21.291212  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.291219  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:21.291224  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:21.291285  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:21.317809  407330 cri.go:89] found id: ""
	I1210 06:36:21.317823  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.317831  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:21.317836  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:21.317893  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:21.349023  407330 cri.go:89] found id: ""
	I1210 06:36:21.349038  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.349046  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:21.349051  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:21.349112  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:21.377021  407330 cri.go:89] found id: ""
	I1210 06:36:21.377036  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.377043  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:21.377049  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:21.377128  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:21.414828  407330 cri.go:89] found id: ""
	I1210 06:36:21.414843  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.414853  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:21.414858  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:21.414924  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:21.448750  407330 cri.go:89] found id: ""
	I1210 06:36:21.448765  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.448772  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:21.448778  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:21.448836  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:21.475060  407330 cri.go:89] found id: ""
	I1210 06:36:21.475082  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.475089  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:21.475097  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:21.475109  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:21.544320  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:21.544350  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:21.559538  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:21.559554  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:21.623730  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:21.615598   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.616210   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.617863   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.618444   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.620054   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:21.615598   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.616210   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.617863   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.618444   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.620054   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:21.623741  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:21.623754  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:21.703706  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:21.703726  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:24.232360  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:24.242917  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:24.242977  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:24.272666  407330 cri.go:89] found id: ""
	I1210 06:36:24.272681  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.272688  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:24.272693  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:24.272762  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:24.298359  407330 cri.go:89] found id: ""
	I1210 06:36:24.298374  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.298381  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:24.298386  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:24.298448  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:24.324096  407330 cri.go:89] found id: ""
	I1210 06:36:24.324110  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.324117  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:24.324122  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:24.324180  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:24.352195  407330 cri.go:89] found id: ""
	I1210 06:36:24.352210  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.352217  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:24.352223  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:24.352281  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:24.392094  407330 cri.go:89] found id: ""
	I1210 06:36:24.392109  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.392116  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:24.392121  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:24.392180  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:24.433688  407330 cri.go:89] found id: ""
	I1210 06:36:24.433702  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.433716  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:24.433721  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:24.433780  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:24.461088  407330 cri.go:89] found id: ""
	I1210 06:36:24.461103  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.461110  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:24.461118  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:24.461140  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:24.491187  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:24.491203  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:24.557420  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:24.557442  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:24.572719  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:24.572736  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:24.638182  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:24.629865   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.630603   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632247   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632788   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.634516   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:24.629865   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.630603   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632247   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632788   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.634516   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:24.638192  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:24.638204  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:27.215263  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:27.225429  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:27.225490  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:27.250600  407330 cri.go:89] found id: ""
	I1210 06:36:27.250623  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.250630  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:27.250636  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:27.250696  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:27.275244  407330 cri.go:89] found id: ""
	I1210 06:36:27.275258  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.275266  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:27.275271  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:27.275337  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:27.303675  407330 cri.go:89] found id: ""
	I1210 06:36:27.303699  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.303707  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:27.303713  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:27.303779  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:27.329179  407330 cri.go:89] found id: ""
	I1210 06:36:27.329211  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.329219  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:27.329225  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:27.329294  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:27.354254  407330 cri.go:89] found id: ""
	I1210 06:36:27.354269  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.354276  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:27.354282  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:27.354340  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:27.386524  407330 cri.go:89] found id: ""
	I1210 06:36:27.386539  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.386546  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:27.386552  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:27.386608  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:27.419941  407330 cri.go:89] found id: ""
	I1210 06:36:27.419964  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.419972  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:27.419980  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:27.419990  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:27.489413  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:27.489436  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:27.504358  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:27.504375  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:27.572076  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:27.564125   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.564752   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.566500   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.567122   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.568559   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:27.572087  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:27.572097  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:27.652684  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:27.652704  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:30.186931  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:30.198655  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:30.198720  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:30.226217  407330 cri.go:89] found id: ""
	I1210 06:36:30.226239  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.226247  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:30.226252  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:30.226319  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:30.254245  407330 cri.go:89] found id: ""
	I1210 06:36:30.254261  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.254268  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:30.254273  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:30.254331  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:30.282139  407330 cri.go:89] found id: ""
	I1210 06:36:30.282154  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.282162  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:30.282167  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:30.282227  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:30.308968  407330 cri.go:89] found id: ""
	I1210 06:36:30.308992  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.308999  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:30.309005  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:30.309076  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:30.337543  407330 cri.go:89] found id: ""
	I1210 06:36:30.337558  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.337565  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:30.337570  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:30.337630  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:30.366448  407330 cri.go:89] found id: ""
	I1210 06:36:30.366463  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.366477  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:30.366483  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:30.366542  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:30.404619  407330 cri.go:89] found id: ""
	I1210 06:36:30.404641  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.404649  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:30.404656  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:30.404667  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:30.484453  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:30.484481  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:30.499101  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:30.499118  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:30.561567  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:30.553438   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.554141   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.555797   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.556329   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.557890   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:30.561578  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:30.561589  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:30.638801  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:30.638822  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:33.169370  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:33.179597  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:33.179662  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:33.204216  407330 cri.go:89] found id: ""
	I1210 06:36:33.204230  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.204246  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:33.204252  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:33.204309  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:33.229498  407330 cri.go:89] found id: ""
	I1210 06:36:33.229512  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.229519  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:33.229524  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:33.229580  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:33.255490  407330 cri.go:89] found id: ""
	I1210 06:36:33.255505  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.255521  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:33.255527  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:33.255593  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:33.283936  407330 cri.go:89] found id: ""
	I1210 06:36:33.283960  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.283968  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:33.283974  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:33.284052  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:33.308959  407330 cri.go:89] found id: ""
	I1210 06:36:33.308974  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.308984  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:33.308990  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:33.309058  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:33.335830  407330 cri.go:89] found id: ""
	I1210 06:36:33.335853  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.335860  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:33.335866  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:33.335936  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:33.362154  407330 cri.go:89] found id: ""
	I1210 06:36:33.362179  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.362187  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:33.362196  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:33.362208  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:33.410395  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:33.410413  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:33.480770  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:33.480789  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:33.496511  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:33.496527  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:33.563939  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:33.556146   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.556663   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.558166   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.558668   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.560192   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:33.563950  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:33.563961  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:36.141828  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:36.152734  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:36.152795  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:36.178688  407330 cri.go:89] found id: ""
	I1210 06:36:36.178703  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.178710  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:36.178716  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:36.178776  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:36.205685  407330 cri.go:89] found id: ""
	I1210 06:36:36.205700  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.205707  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:36.205712  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:36.205771  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:36.231383  407330 cri.go:89] found id: ""
	I1210 06:36:36.231398  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.231411  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:36.231418  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:36.231480  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:36.257291  407330 cri.go:89] found id: ""
	I1210 06:36:36.257316  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.257324  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:36.257329  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:36.257400  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:36.287683  407330 cri.go:89] found id: ""
	I1210 06:36:36.287697  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.287704  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:36.287709  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:36.287767  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:36.313785  407330 cri.go:89] found id: ""
	I1210 06:36:36.313799  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.313807  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:36.313812  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:36.313871  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:36.339325  407330 cri.go:89] found id: ""
	I1210 06:36:36.339339  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.339347  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:36.339356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:36.339369  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:36.421249  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:36.421268  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:36.458225  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:36.458243  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:36.528365  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:36.528384  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:36.544683  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:36.544705  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:36.611624  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:36.602655   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.603473   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.605101   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.605888   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.607572   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:39.111891  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:39.122952  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:39.123016  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:39.151788  407330 cri.go:89] found id: ""
	I1210 06:36:39.151817  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.151825  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:39.151831  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:39.151902  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:39.176656  407330 cri.go:89] found id: ""
	I1210 06:36:39.176679  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.176686  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:39.176691  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:39.176759  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:39.203206  407330 cri.go:89] found id: ""
	I1210 06:36:39.203220  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.203227  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:39.203233  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:39.203289  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:39.228848  407330 cri.go:89] found id: ""
	I1210 06:36:39.228862  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.228869  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:39.228875  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:39.228933  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:39.258475  407330 cri.go:89] found id: ""
	I1210 06:36:39.258512  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.258519  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:39.258524  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:39.258589  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:39.283240  407330 cri.go:89] found id: ""
	I1210 06:36:39.283254  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.283261  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:39.283268  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:39.283328  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:39.312591  407330 cri.go:89] found id: ""
	I1210 06:36:39.312604  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.312611  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:39.312619  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:39.312629  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:39.380680  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:39.380703  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:39.397793  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:39.397809  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:39.469117  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:39.460579   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.461325   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.463132   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.463721   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.465358   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:39.469128  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:39.469139  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:39.546111  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:39.546131  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:42.076431  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:42.089265  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:42.089335  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:42.121496  407330 cri.go:89] found id: ""
	I1210 06:36:42.121512  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.121520  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:42.121526  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:42.121593  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:42.151688  407330 cri.go:89] found id: ""
	I1210 06:36:42.151704  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.151712  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:42.151717  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:42.151784  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:42.190925  407330 cri.go:89] found id: ""
	I1210 06:36:42.190942  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.190949  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:42.190955  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:42.191063  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:42.225827  407330 cri.go:89] found id: ""
	I1210 06:36:42.225849  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.225857  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:42.225863  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:42.225931  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:42.254453  407330 cri.go:89] found id: ""
	I1210 06:36:42.254467  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.254475  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:42.254480  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:42.254557  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:42.281514  407330 cri.go:89] found id: ""
	I1210 06:36:42.281536  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.281545  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:42.281550  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:42.281615  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:42.309082  407330 cri.go:89] found id: ""
	I1210 06:36:42.309097  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.309105  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:42.309115  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:42.309127  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:42.325376  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:42.325393  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:42.394971  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:42.386397   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.387396   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.389262   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.389603   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.390932   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:42.394982  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:42.394993  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:42.480444  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:42.480463  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:42.513077  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:42.513094  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:45.082079  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:45.095928  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:45.096005  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:45.136147  407330 cri.go:89] found id: ""
	I1210 06:36:45.136165  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.136172  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:45.136178  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:45.136321  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:45.171561  407330 cri.go:89] found id: ""
	I1210 06:36:45.171577  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.171584  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:45.171590  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:45.171667  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:45.214225  407330 cri.go:89] found id: ""
	I1210 06:36:45.214243  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.214277  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:45.214282  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:45.214364  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:45.274027  407330 cri.go:89] found id: ""
	I1210 06:36:45.274044  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.274052  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:45.274058  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:45.274128  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:45.321536  407330 cri.go:89] found id: ""
	I1210 06:36:45.321553  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.321561  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:45.321567  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:45.321719  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:45.355270  407330 cri.go:89] found id: ""
	I1210 06:36:45.355285  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.355303  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:45.355310  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:45.355386  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:45.388777  407330 cri.go:89] found id: ""
	I1210 06:36:45.388801  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.388809  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:45.388817  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:45.388827  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:45.478699  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:45.478723  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:45.507903  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:45.507921  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:45.575844  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:45.575864  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:45.591861  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:45.591885  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:45.656312  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:45.648123   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.648663   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.650406   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.650984   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.652724   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:48.156556  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:48.166976  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:48.167036  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:48.192782  407330 cri.go:89] found id: ""
	I1210 06:36:48.192807  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.192817  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:48.192824  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:48.192889  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:48.218586  407330 cri.go:89] found id: ""
	I1210 06:36:48.218600  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.218607  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:48.218623  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:48.218682  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:48.244757  407330 cri.go:89] found id: ""
	I1210 06:36:48.244771  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.244778  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:48.244783  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:48.244841  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:48.271671  407330 cri.go:89] found id: ""
	I1210 06:36:48.271685  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.271692  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:48.271697  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:48.271756  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:48.298466  407330 cri.go:89] found id: ""
	I1210 06:36:48.298480  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.298487  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:48.298493  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:48.298603  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:48.324794  407330 cri.go:89] found id: ""
	I1210 06:36:48.324808  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.324825  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:48.324830  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:48.324888  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:48.351036  407330 cri.go:89] found id: ""
	I1210 06:36:48.351051  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.351058  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:48.351065  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:48.351076  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:48.384287  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:48.384303  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:48.462134  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:48.462154  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:48.477421  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:48.477439  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:48.544257  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:48.535925   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.536728   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.538380   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.538978   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.540777   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:48.544268  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:48.544279  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:51.122102  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:51.133691  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:51.133753  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:51.161091  407330 cri.go:89] found id: ""
	I1210 06:36:51.161106  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.161113  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:51.161119  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:51.161217  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:51.189850  407330 cri.go:89] found id: ""
	I1210 06:36:51.189865  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.189872  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:51.189877  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:51.189944  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:51.215676  407330 cri.go:89] found id: ""
	I1210 06:36:51.215691  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.215698  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:51.215703  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:51.215763  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:51.241638  407330 cri.go:89] found id: ""
	I1210 06:36:51.241653  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.241660  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:51.241666  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:51.241728  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:51.266737  407330 cri.go:89] found id: ""
	I1210 06:36:51.266752  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.266759  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:51.266764  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:51.266823  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:51.291896  407330 cri.go:89] found id: ""
	I1210 06:36:51.291911  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.291918  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:51.291923  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:51.291982  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:51.317807  407330 cri.go:89] found id: ""
	I1210 06:36:51.317823  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.317830  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:51.317838  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:51.317849  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:51.385260  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:51.385280  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:51.400443  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:51.400459  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:51.479768  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:51.472416   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.472835   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474317   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474686   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.476122   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:51.472416   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.472835   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474317   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474686   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.476122   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:51.479778  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:51.479789  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:51.556275  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:51.556295  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
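
The cycle above now repeats roughly every three seconds until the wait deadline: minikube probes for a kube-apiserver process with pgrep, asks CRI-O for each control-plane container by name, finds none, and re-gathers the same logs. A minimal sketch of the per-cycle container checks, assuming shell access to the node (the crictl and pgrep invocations are copied verbatim from the log; the loop and the component list are illustrative scaffolding, not minikube's actual code):

    # hypothetical manual reproduction of the per-cycle checks
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      # --quiet prints only container IDs; empty output is the "0 containers" case above
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
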
	I1210 06:36:54.087759  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:54.098770  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:54.098837  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:54.124003  407330 cri.go:89] found id: ""
	I1210 06:36:54.124017  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.124025  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:54.124030  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:54.124091  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:54.150185  407330 cri.go:89] found id: ""
	I1210 06:36:54.150200  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.150207  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:54.150213  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:54.150272  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:54.177121  407330 cri.go:89] found id: ""
	I1210 06:36:54.177135  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.177143  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:54.177148  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:54.177248  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:54.202926  407330 cri.go:89] found id: ""
	I1210 06:36:54.202941  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.202948  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:54.202953  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:54.203013  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:54.232186  407330 cri.go:89] found id: ""
	I1210 06:36:54.232200  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.232215  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:54.232221  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:54.232291  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:54.257570  407330 cri.go:89] found id: ""
	I1210 06:36:54.257584  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.257592  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:54.257597  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:54.257656  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:54.282060  407330 cri.go:89] found id: ""
	I1210 06:36:54.282074  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.282081  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:54.282088  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:54.282099  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:54.347704  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:54.347728  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:54.362634  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:54.362652  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:54.450702  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:54.442872   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.443270   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.444790   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.445114   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.446658   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:54.442872   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.443270   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.444790   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.445114   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.446658   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:54.450713  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:54.450723  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:54.528465  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:54.528487  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
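
When every listing comes back empty, the same five diagnostics are gathered each round. They can be run by hand with the exact commands from the log (the kubectl binary path and kubeconfig location are quoted as-is; only the trailing comments are editorial):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig   # fails here: :8441 refuses connections
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
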
	I1210 06:36:57.060906  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:57.071228  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:57.071304  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:57.096846  407330 cri.go:89] found id: ""
	I1210 06:36:57.096859  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.096867  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:57.096872  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:57.096932  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:57.122828  407330 cri.go:89] found id: ""
	I1210 06:36:57.122845  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.122852  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:57.122858  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:57.122918  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:57.154708  407330 cri.go:89] found id: ""
	I1210 06:36:57.154723  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.154730  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:57.154736  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:57.154798  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:57.181521  407330 cri.go:89] found id: ""
	I1210 06:36:57.181543  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.181550  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:57.181556  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:57.181620  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:57.206722  407330 cri.go:89] found id: ""
	I1210 06:36:57.206736  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.206743  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:57.206749  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:57.206811  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:57.232129  407330 cri.go:89] found id: ""
	I1210 06:36:57.232143  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.232150  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:57.232155  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:57.232212  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:57.258044  407330 cri.go:89] found id: ""
	I1210 06:36:57.258057  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.258064  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:57.258071  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:57.258081  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:57.285624  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:57.285640  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:57.351757  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:57.351778  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:57.367138  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:57.367157  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:57.458560  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:57.450875   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.451498   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453015   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453535   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.454978   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:57.450875   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.451498   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453015   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453535   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.454978   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:57.458571  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:57.458582  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:00.035650  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:00.112450  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:00.112528  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:00.233350  407330 cri.go:89] found id: ""
	I1210 06:37:00.233368  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.233377  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:00.233383  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:00.233454  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:00.328120  407330 cri.go:89] found id: ""
	I1210 06:37:00.328136  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.328144  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:00.328150  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:00.328216  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:00.369964  407330 cri.go:89] found id: ""
	I1210 06:37:00.369981  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.369989  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:00.369995  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:00.370065  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:00.412610  407330 cri.go:89] found id: ""
	I1210 06:37:00.412628  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.412636  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:00.412642  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:00.412717  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:00.458193  407330 cri.go:89] found id: ""
	I1210 06:37:00.458212  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.458220  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:00.458225  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:00.458300  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:00.486825  407330 cri.go:89] found id: ""
	I1210 06:37:00.486840  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.486848  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:00.486853  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:00.486912  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:00.514588  407330 cri.go:89] found id: ""
	I1210 06:37:00.514604  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.514612  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:00.514631  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:00.514643  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:00.544788  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:00.544807  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:00.611036  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:00.611058  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:00.625887  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:00.625904  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:00.692620  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:00.684863   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.685709   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687299   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687611   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.689091   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:00.684863   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.685709   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687299   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687611   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.689091   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:00.692631  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:00.692642  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:03.270067  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:03.280541  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:03.280604  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:03.306695  407330 cri.go:89] found id: ""
	I1210 06:37:03.306710  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.306718  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:03.306724  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:03.306788  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:03.335215  407330 cri.go:89] found id: ""
	I1210 06:37:03.335230  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.335237  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:03.335243  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:03.335302  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:03.366128  407330 cri.go:89] found id: ""
	I1210 06:37:03.366143  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.366150  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:03.366155  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:03.366214  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:03.407867  407330 cri.go:89] found id: ""
	I1210 06:37:03.407883  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.407891  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:03.407896  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:03.407957  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:03.439688  407330 cri.go:89] found id: ""
	I1210 06:37:03.439703  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.439710  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:03.439716  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:03.439776  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:03.470617  407330 cri.go:89] found id: ""
	I1210 06:37:03.470633  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.470640  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:03.470645  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:03.470708  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:03.495476  407330 cri.go:89] found id: ""
	I1210 06:37:03.495491  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.495498  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:03.495506  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:03.495516  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:03.562017  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:03.562037  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:03.577764  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:03.577782  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:03.644175  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:03.636471   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.637208   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.638686   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.639152   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.640632   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:03.636471   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.637208   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.638686   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.639152   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.640632   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:03.644187  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:03.644198  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:03.721903  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:03.721925  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:06.250929  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:06.261704  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:06.261767  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:06.290140  407330 cri.go:89] found id: ""
	I1210 06:37:06.290155  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.290163  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:06.290168  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:06.290226  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:06.315796  407330 cri.go:89] found id: ""
	I1210 06:37:06.315811  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.315819  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:06.315826  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:06.315884  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:06.340906  407330 cri.go:89] found id: ""
	I1210 06:37:06.340920  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.340927  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:06.340932  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:06.340996  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:06.367812  407330 cri.go:89] found id: ""
	I1210 06:37:06.367827  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.367835  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:06.367840  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:06.367899  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:06.401044  407330 cri.go:89] found id: ""
	I1210 06:37:06.401058  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.401065  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:06.401070  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:06.401166  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:06.438778  407330 cri.go:89] found id: ""
	I1210 06:37:06.438799  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.438806  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:06.438811  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:06.438892  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:06.466678  407330 cri.go:89] found id: ""
	I1210 06:37:06.466692  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.466700  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:06.466708  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:06.466718  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:06.544177  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:06.544200  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:06.573010  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:06.573027  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:06.640533  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:06.640553  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:06.656110  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:06.656128  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:06.723670  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:06.715242   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.715893   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.717714   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.718356   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.720062   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:06.715242   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.715893   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.717714   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.718356   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.720062   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
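
Each describe-nodes attempt fails identically: kubectl cannot reach https://localhost:8441 and gets connection refused, meaning nothing is listening on the apiserver port at all. A quick way to confirm that from the node (generic commands, not taken from this log; assumes ss and curl are available in the image):

    sudo ss -ltnp | grep 8441 || echo "nothing listening on :8441"
    curl -ksS https://localhost:8441/livez || true   # expected here: connection refused
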
	I1210 06:37:09.224405  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:09.234680  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:09.234741  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:09.260264  407330 cri.go:89] found id: ""
	I1210 06:37:09.260278  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.260285  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:09.260290  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:09.260348  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:09.285806  407330 cri.go:89] found id: ""
	I1210 06:37:09.285823  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.285830  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:09.285836  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:09.285899  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:09.315817  407330 cri.go:89] found id: ""
	I1210 06:37:09.315832  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.315840  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:09.315845  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:09.315901  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:09.346059  407330 cri.go:89] found id: ""
	I1210 06:37:09.346074  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.346081  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:09.346087  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:09.346144  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:09.381275  407330 cri.go:89] found id: ""
	I1210 06:37:09.381290  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.381297  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:09.381303  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:09.381366  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:09.414891  407330 cri.go:89] found id: ""
	I1210 06:37:09.414905  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.414912  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:09.414918  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:09.414979  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:09.443742  407330 cri.go:89] found id: ""
	I1210 06:37:09.443757  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.443763  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:09.443771  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:09.443781  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:09.510740  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:09.510762  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:09.526338  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:09.526355  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:09.590739  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:09.582122   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.582824   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.584640   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.585274   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.587041   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:09.582122   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.582824   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.584640   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.585274   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.587041   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:09.590750  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:09.590762  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:09.668271  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:09.668292  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:12.200039  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:12.210520  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:12.210590  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:12.237060  407330 cri.go:89] found id: ""
	I1210 06:37:12.237075  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.237083  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:12.237088  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:12.237160  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:12.263263  407330 cri.go:89] found id: ""
	I1210 06:37:12.263277  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.263284  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:12.263290  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:12.263354  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:12.291756  407330 cri.go:89] found id: ""
	I1210 06:37:12.291772  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.291780  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:12.291785  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:12.291847  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:12.321162  407330 cri.go:89] found id: ""
	I1210 06:37:12.321177  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.321213  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:12.321218  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:12.321279  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:12.347025  407330 cri.go:89] found id: ""
	I1210 06:37:12.347039  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.347054  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:12.347060  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:12.347121  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:12.376035  407330 cri.go:89] found id: ""
	I1210 06:37:12.376050  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.376058  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:12.376064  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:12.376126  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:12.410703  407330 cri.go:89] found id: ""
	I1210 06:37:12.410717  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.410724  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:12.410733  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:12.410744  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:12.486662  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:12.486686  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:12.502236  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:12.502255  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:12.568662  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:12.560235   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.561423   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.562538   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.563321   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.564916   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:12.560235   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.561423   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.562538   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.563321   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.564916   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:12.568672  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:12.568683  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:12.645878  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:12.645901  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:15.177927  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:15.191193  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:15.191288  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:15.219881  407330 cri.go:89] found id: ""
	I1210 06:37:15.219896  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.219904  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:15.219911  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:15.219971  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:15.247528  407330 cri.go:89] found id: ""
	I1210 06:37:15.247544  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.247551  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:15.247557  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:15.247620  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:15.274888  407330 cri.go:89] found id: ""
	I1210 06:37:15.274903  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.274911  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:15.274920  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:15.274979  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:15.300280  407330 cri.go:89] found id: ""
	I1210 06:37:15.300295  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.300302  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:15.300308  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:15.300369  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:15.325424  407330 cri.go:89] found id: ""
	I1210 06:37:15.325438  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.325445  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:15.325450  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:15.325512  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:15.359467  407330 cri.go:89] found id: ""
	I1210 06:37:15.359482  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.359490  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:15.359495  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:15.359551  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:15.399967  407330 cri.go:89] found id: ""
	I1210 06:37:15.399982  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.399990  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:15.399998  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:15.400019  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:15.477621  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:15.477643  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:15.493123  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:15.493140  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:15.564193  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:15.554932   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.555848   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.557673   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.558388   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.560046   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:15.554932   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.555848   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.557673   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.558388   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.560046   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:15.564206  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:15.564216  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:15.640233  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:15.640254  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
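
Because crictl reports zero containers for every component, including kindnet, this suggests the control-plane static pods were never created at all, so the useful signal sits in the kubelet and CRI-O journals rather than in container logs. A generic triage sketch (the grep filters and the pod-sandbox listing are assumptions, not commands from this log):

    sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail' | tail -n 20
    sudo journalctl -u crio   -n 400 | grep -iE 'error|fail' | tail -n 20
    sudo crictl pods   # pod sandboxes may exist even when no containers are listed
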
	I1210 06:37:18.174394  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:18.186025  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:18.186097  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:18.215781  407330 cri.go:89] found id: ""
	I1210 06:37:18.215795  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.215814  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:18.215819  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:18.215877  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:18.241012  407330 cri.go:89] found id: ""
	I1210 06:37:18.241033  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.241044  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:18.241054  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:18.241155  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:18.270058  407330 cri.go:89] found id: ""
	I1210 06:37:18.270072  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.270079  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:18.270090  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:18.270147  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:18.297554  407330 cri.go:89] found id: ""
	I1210 06:37:18.297576  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.297593  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:18.297603  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:18.297695  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:18.330116  407330 cri.go:89] found id: ""
	I1210 06:37:18.330130  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.330136  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:18.330142  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:18.330217  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:18.360475  407330 cri.go:89] found id: ""
	I1210 06:37:18.360489  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.360496  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:18.360502  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:18.360570  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:18.393014  407330 cri.go:89] found id: ""
	I1210 06:37:18.393028  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.393035  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:18.393043  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:18.393064  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:18.412466  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:18.412484  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:18.485431  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:18.477889   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.478765   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.479651   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.480367   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.481965   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:18.485441  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:18.485452  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:18.561043  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:18.561064  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:18.588628  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:18.588644  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:21.156119  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:21.166481  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:21.166541  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:21.191589  407330 cri.go:89] found id: ""
	I1210 06:37:21.191604  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.191611  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:21.191625  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:21.191689  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:21.217715  407330 cri.go:89] found id: ""
	I1210 06:37:21.217730  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.217738  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:21.217744  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:21.217811  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:21.246916  407330 cri.go:89] found id: ""
	I1210 06:37:21.246930  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.246945  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:21.246950  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:21.247005  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:21.271644  407330 cri.go:89] found id: ""
	I1210 06:37:21.271659  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.271666  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:21.271672  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:21.271739  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:21.299971  407330 cri.go:89] found id: ""
	I1210 06:37:21.299985  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.299993  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:21.299998  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:21.300057  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:21.325497  407330 cri.go:89] found id: ""
	I1210 06:37:21.325512  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.325519  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:21.325524  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:21.325583  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:21.351049  407330 cri.go:89] found id: ""
	I1210 06:37:21.351064  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.351071  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:21.351079  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:21.351095  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:21.421855  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:21.421874  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:21.437324  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:21.437341  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:21.499548  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:21.490639   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.491333   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.493043   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.493634   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.495290   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:21.499604  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:21.499615  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:21.576803  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:21.576824  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:24.110608  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:24.121006  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:24.121068  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:24.146461  407330 cri.go:89] found id: ""
	I1210 06:37:24.146476  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.146483  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:24.146488  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:24.146601  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:24.172866  407330 cri.go:89] found id: ""
	I1210 06:37:24.172882  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.172889  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:24.172894  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:24.172956  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:24.199448  407330 cri.go:89] found id: ""
	I1210 06:37:24.199463  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.199470  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:24.199475  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:24.199535  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:24.229234  407330 cri.go:89] found id: ""
	I1210 06:37:24.229250  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.229257  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:24.229263  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:24.229323  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:24.254311  407330 cri.go:89] found id: ""
	I1210 06:37:24.254326  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.254334  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:24.254339  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:24.254401  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:24.284029  407330 cri.go:89] found id: ""
	I1210 06:37:24.284044  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.284051  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:24.284056  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:24.284131  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:24.309694  407330 cri.go:89] found id: ""
	I1210 06:37:24.309708  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.309715  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:24.309724  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:24.309735  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:24.372553  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:24.363947   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.364695   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.366686   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.367278   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.368967   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:24.372563  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:24.372575  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:24.464562  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:24.464585  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:24.493762  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:24.493778  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:24.563092  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:24.563113  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:27.078938  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:27.089277  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:27.089338  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:27.114399  407330 cri.go:89] found id: ""
	I1210 06:37:27.114413  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.114421  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:27.114427  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:27.114491  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:27.144680  407330 cri.go:89] found id: ""
	I1210 06:37:27.144695  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.144702  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:27.144707  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:27.144765  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:27.168950  407330 cri.go:89] found id: ""
	I1210 06:37:27.168965  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.168972  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:27.168977  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:27.169034  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:27.196136  407330 cri.go:89] found id: ""
	I1210 06:37:27.196151  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.196159  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:27.196164  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:27.196221  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:27.225403  407330 cri.go:89] found id: ""
	I1210 06:37:27.225418  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.225426  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:27.225432  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:27.225492  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:27.252922  407330 cri.go:89] found id: ""
	I1210 06:37:27.252938  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.252945  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:27.252950  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:27.253009  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:27.278155  407330 cri.go:89] found id: ""
	I1210 06:37:27.278169  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.278177  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:27.278185  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:27.278197  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:27.309557  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:27.309573  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:27.385911  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:27.385939  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:27.404671  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:27.404689  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:27.482019  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:27.473831   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.474734   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.476086   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.476735   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.478362   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:27.482029  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:27.482040  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:30.059859  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:30.073120  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:30.073221  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:30.104876  407330 cri.go:89] found id: ""
	I1210 06:37:30.104902  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.104910  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:30.104915  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:30.104992  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:30.133968  407330 cri.go:89] found id: ""
	I1210 06:37:30.133984  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.133999  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:30.134007  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:30.134079  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:30.162870  407330 cri.go:89] found id: ""
	I1210 06:37:30.162888  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.162895  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:30.162901  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:30.162965  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:30.190402  407330 cri.go:89] found id: ""
	I1210 06:37:30.190416  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.190424  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:30.190429  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:30.190488  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:30.219884  407330 cri.go:89] found id: ""
	I1210 06:37:30.219913  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.219920  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:30.219926  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:30.219999  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:30.246737  407330 cri.go:89] found id: ""
	I1210 06:37:30.246752  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.246760  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:30.246765  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:30.246825  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:30.273326  407330 cri.go:89] found id: ""
	I1210 06:37:30.273340  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.273348  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:30.273356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:30.273366  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:30.350646  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:30.350667  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:30.385499  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:30.385515  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:30.461766  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:30.461790  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:30.477421  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:30.477438  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:30.539694  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:30.532297   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.532864   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.534312   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.534817   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.536259   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:33.041379  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:33.052111  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:33.052178  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:33.080472  407330 cri.go:89] found id: ""
	I1210 06:37:33.080487  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.080494  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:33.080499  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:33.080556  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:33.107304  407330 cri.go:89] found id: ""
	I1210 06:37:33.107319  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.107326  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:33.107331  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:33.107389  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:33.133653  407330 cri.go:89] found id: ""
	I1210 06:37:33.133668  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.133675  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:33.133680  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:33.133740  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:33.159244  407330 cri.go:89] found id: ""
	I1210 06:37:33.159259  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.159266  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:33.159272  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:33.159328  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:33.185378  407330 cri.go:89] found id: ""
	I1210 06:37:33.185393  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.185402  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:33.185407  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:33.185466  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:33.210558  407330 cri.go:89] found id: ""
	I1210 06:37:33.210588  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.210609  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:33.210615  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:33.210672  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:33.235742  407330 cri.go:89] found id: ""
	I1210 06:37:33.235756  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.235773  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:33.235782  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:33.235796  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:33.303992  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:33.304010  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:33.321348  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:33.321367  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:33.396780  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:33.385824   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.386759   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.387788   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.388485   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.390532   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:33.396789  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:33.396800  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:33.483704  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:33.483727  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:36.014717  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:36.026269  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:36.026331  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:36.054956  407330 cri.go:89] found id: ""
	I1210 06:37:36.054982  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.054989  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:36.054995  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:36.055055  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:36.081454  407330 cri.go:89] found id: ""
	I1210 06:37:36.081470  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.081477  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:36.081483  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:36.081544  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:36.112094  407330 cri.go:89] found id: ""
	I1210 06:37:36.112108  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.112116  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:36.112121  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:36.112181  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:36.138426  407330 cri.go:89] found id: ""
	I1210 06:37:36.138441  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.138448  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:36.138453  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:36.138512  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:36.164608  407330 cri.go:89] found id: ""
	I1210 06:37:36.164623  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.164630  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:36.164637  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:36.164693  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:36.192038  407330 cri.go:89] found id: ""
	I1210 06:37:36.192052  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.192059  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:36.192064  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:36.192124  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:36.221044  407330 cri.go:89] found id: ""
	I1210 06:37:36.221058  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.221065  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:36.221073  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:36.221085  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:36.250907  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:36.250923  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:36.316733  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:36.316753  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:36.332493  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:36.332509  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:36.412829  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:36.401482   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.404020   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.404535   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.405958   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.407122   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:36.412843  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:36.412857  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:39.007236  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:39.020585  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:39.020658  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:39.046864  407330 cri.go:89] found id: ""
	I1210 06:37:39.046879  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.046886  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:39.046892  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:39.046954  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:39.076119  407330 cri.go:89] found id: ""
	I1210 06:37:39.076143  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.076152  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:39.076157  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:39.076226  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:39.102655  407330 cri.go:89] found id: ""
	I1210 06:37:39.102671  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.102678  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:39.102684  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:39.102746  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:39.128306  407330 cri.go:89] found id: ""
	I1210 06:37:39.128320  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.128327  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:39.128333  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:39.128407  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:39.156045  407330 cri.go:89] found id: ""
	I1210 06:37:39.156069  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.156076  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:39.156087  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:39.156156  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:39.183781  407330 cri.go:89] found id: ""
	I1210 06:37:39.183796  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.183804  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:39.183809  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:39.183867  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:39.209244  407330 cri.go:89] found id: ""
	I1210 06:37:39.209258  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.209266  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:39.209273  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:39.209294  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:39.274373  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:39.274392  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:39.289765  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:39.289782  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:39.353525  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:39.345986   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.346357   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.348003   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.348560   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.350004   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:39.353537  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:39.353548  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:39.432803  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:39.432822  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:41.965778  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:41.979117  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:41.979179  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:42.015640  407330 cri.go:89] found id: ""
	I1210 06:37:42.015658  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.015683  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:42.015689  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:42.015759  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:42.048532  407330 cri.go:89] found id: ""
	I1210 06:37:42.048546  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.048553  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:42.048559  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:42.048618  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:42.076982  407330 cri.go:89] found id: ""
	I1210 06:37:42.076998  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.077006  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:42.077012  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:42.077084  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:42.112254  407330 cri.go:89] found id: ""
	I1210 06:37:42.112295  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.112304  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:42.112312  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:42.112393  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:42.150624  407330 cri.go:89] found id: ""
	I1210 06:37:42.150640  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.150647  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:42.150653  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:42.150718  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:42.180813  407330 cri.go:89] found id: ""
	I1210 06:37:42.180845  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.180854  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:42.180860  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:42.180927  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:42.212103  407330 cri.go:89] found id: ""
	I1210 06:37:42.212120  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.212129  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:42.212139  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:42.212151  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:42.228371  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:42.228388  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:42.298333  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:42.290091   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.290977   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.292784   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.293526   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.294529   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:42.290091   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.290977   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.292784   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.293526   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.294529   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
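The kubectl stderr above is a plain TCP-level failure: nothing is listening on localhost:8441, so every API discovery request dies with "connection refused" before it ever reaches a server. A quick way to confirm that from Go (the address is copied from the log; the 2-second timeout is an arbitrary choice for illustration):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver address kubectl keeps failing against in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err) // expect "connection refused" here
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}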
	I1210 06:37:42.298344  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:42.298363  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:42.375054  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:42.375076  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:42.409015  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:42.409031  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:44.985261  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:44.995937  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:44.995997  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:45.074766  407330 cri.go:89] found id: ""
	I1210 06:37:45.074782  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.074790  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:45.074805  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:45.074874  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:45.130730  407330 cri.go:89] found id: ""
	I1210 06:37:45.130747  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.130755  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:45.130760  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:45.130828  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:45.169030  407330 cri.go:89] found id: ""
	I1210 06:37:45.169058  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.169067  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:45.169073  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:45.169157  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:45.215800  407330 cri.go:89] found id: ""
	I1210 06:37:45.215826  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.215835  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:45.215841  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:45.215915  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:45.274656  407330 cri.go:89] found id: ""
	I1210 06:37:45.274675  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.274684  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:45.274689  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:45.274771  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:45.313260  407330 cri.go:89] found id: ""
	I1210 06:37:45.313277  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.313290  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:45.313296  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:45.313418  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:45.347971  407330 cri.go:89] found id: ""
	I1210 06:37:45.347997  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.348005  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:45.348014  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:45.348028  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:45.381763  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:45.381780  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:45.462459  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:45.462482  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:45.477837  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:45.477854  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:45.547658  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:45.539217   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.540334   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.541688   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.542195   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.543964   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:45.539217   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.540334   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.541688   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.542195   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.543964   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:45.547669  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:45.547680  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:48.124454  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:48.134803  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:48.134866  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:48.162481  407330 cri.go:89] found id: ""
	I1210 06:37:48.162498  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.162507  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:48.162512  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:48.162572  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:48.192262  407330 cri.go:89] found id: ""
	I1210 06:37:48.192276  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.192283  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:48.192289  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:48.192350  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:48.220715  407330 cri.go:89] found id: ""
	I1210 06:37:48.220730  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.220737  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:48.220742  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:48.220802  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:48.244954  407330 cri.go:89] found id: ""
	I1210 06:37:48.244968  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.244976  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:48.244981  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:48.245040  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:48.272316  407330 cri.go:89] found id: ""
	I1210 06:37:48.272330  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.272337  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:48.272343  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:48.272399  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:48.300204  407330 cri.go:89] found id: ""
	I1210 06:37:48.300219  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.300226  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:48.300232  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:48.300293  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:48.329747  407330 cri.go:89] found id: ""
	I1210 06:37:48.329762  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.329769  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:48.329777  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:48.329789  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:48.395638  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:48.395658  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:48.411092  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:48.411108  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:48.478819  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:48.470539   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.471330   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.472882   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.473423   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.475010   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:48.470539   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.471330   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.472882   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.473423   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.475010   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:48.478829  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:48.478841  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:48.556858  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:48.556880  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:51.087332  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:51.097952  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:51.098014  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:51.125310  407330 cri.go:89] found id: ""
	I1210 06:37:51.125325  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.125333  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:51.125345  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:51.125424  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:51.152518  407330 cri.go:89] found id: ""
	I1210 06:37:51.152533  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.152541  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:51.152547  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:51.152619  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:51.181199  407330 cri.go:89] found id: ""
	I1210 06:37:51.181214  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.181222  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:51.181233  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:51.181302  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:51.211368  407330 cri.go:89] found id: ""
	I1210 06:37:51.211382  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.211399  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:51.211405  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:51.211473  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:51.240371  407330 cri.go:89] found id: ""
	I1210 06:37:51.240386  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.240413  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:51.240420  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:51.240493  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:51.266983  407330 cri.go:89] found id: ""
	I1210 06:37:51.266998  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.267005  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:51.267010  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:51.267077  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:51.292392  407330 cri.go:89] found id: ""
	I1210 06:37:51.292417  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.292425  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:51.292433  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:51.292443  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:51.357098  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:51.357119  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:51.372292  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:51.372310  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:51.456874  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:51.448584   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.449513   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.451286   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.451619   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.453250   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:51.448584   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.449513   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.451286   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.451619   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.453250   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:51.456885  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:51.456896  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:51.532131  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:51.532155  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
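The container-status command above is a shell fallback chain: the backticked `which crictl || echo crictl` resolves crictl's path when it exists, and when it does not, the echoed literal fails to execute, so the outer `||` falls through to `sudo docker ps -a`. A minimal Go sketch that runs the same one-liner (assuming /bin/bash and passwordless sudo, neither of which this report guarantees):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Verbatim fallback chain from the log: prefer crictl, else docker.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("container status failed:", err)
	}
}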
	I1210 06:37:54.070226  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:54.081032  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:54.081095  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:54.107855  407330 cri.go:89] found id: ""
	I1210 06:37:54.107871  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.107878  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:54.107884  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:54.107954  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:54.133470  407330 cri.go:89] found id: ""
	I1210 06:37:54.133484  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.133491  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:54.133496  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:54.133556  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:54.160836  407330 cri.go:89] found id: ""
	I1210 06:37:54.160851  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.160859  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:54.160864  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:54.160931  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:54.191664  407330 cri.go:89] found id: ""
	I1210 06:37:54.191679  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.191686  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:54.191692  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:54.191758  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:54.216267  407330 cri.go:89] found id: ""
	I1210 06:37:54.216280  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.216298  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:54.216303  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:54.216370  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:54.241369  407330 cri.go:89] found id: ""
	I1210 06:37:54.241383  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.241390  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:54.241395  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:54.241454  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:54.265711  407330 cri.go:89] found id: ""
	I1210 06:37:54.265725  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.265732  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:54.265740  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:54.265750  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:54.280292  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:54.280314  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:54.343110  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:54.335264   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.335904   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337479   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337979   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.339543   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:54.335264   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.335904   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337479   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337979   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.339543   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:54.343120  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:54.343131  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:54.421398  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:54.421417  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:54.457832  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:54.457849  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:57.030320  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:57.040862  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:57.040923  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:57.065817  407330 cri.go:89] found id: ""
	I1210 06:37:57.065832  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.065840  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:57.065845  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:57.065908  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:57.091828  407330 cri.go:89] found id: ""
	I1210 06:37:57.091842  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.091849  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:57.091855  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:57.091912  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:57.117033  407330 cri.go:89] found id: ""
	I1210 06:37:57.117047  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.117054  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:57.117060  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:57.117128  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:57.143007  407330 cri.go:89] found id: ""
	I1210 06:37:57.143021  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.143028  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:57.143034  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:57.143090  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:57.171364  407330 cri.go:89] found id: ""
	I1210 06:37:57.171379  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.171386  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:57.171391  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:57.171451  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:57.195695  407330 cri.go:89] found id: ""
	I1210 06:37:57.195723  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.195730  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:57.195736  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:57.195802  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:57.225018  407330 cri.go:89] found id: ""
	I1210 06:37:57.225033  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.225040  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:57.225049  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:57.225060  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:57.299878  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:57.291518   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.292410   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294182   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294649   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.296294   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:57.291518   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.292410   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294182   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294649   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.296294   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:57.299889  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:57.299899  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:57.377757  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:57.377778  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:57.420515  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:57.420531  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:57.493246  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:57.493267  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:00.010113  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:00.082560  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:00.082643  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:00.187405  407330 cri.go:89] found id: ""
	I1210 06:38:00.190377  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.190403  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:00.190413  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:00.190506  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:00.256368  407330 cri.go:89] found id: ""
	I1210 06:38:00.256395  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.256405  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:00.256411  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:00.256498  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:00.309570  407330 cri.go:89] found id: ""
	I1210 06:38:00.309587  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.309595  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:00.309602  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:00.309691  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:00.359167  407330 cri.go:89] found id: ""
	I1210 06:38:00.359184  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.359193  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:00.359199  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:00.359284  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:00.401533  407330 cri.go:89] found id: ""
	I1210 06:38:00.401549  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.401557  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:00.401562  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:00.401629  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:00.439769  407330 cri.go:89] found id: ""
	I1210 06:38:00.439784  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.439792  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:00.439797  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:00.439863  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:00.471369  407330 cri.go:89] found id: ""
	I1210 06:38:00.471384  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.471392  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:00.471400  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:00.471412  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:00.504494  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:00.504511  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:00.570722  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:00.570742  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:00.585662  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:00.585679  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:00.648503  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:00.640817   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.641687   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643282   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643584   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.645048   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:00.640817   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.641687   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643282   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643584   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.645048   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:00.648513  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:00.648524  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
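The timestamps in this excerpt (06:37:39 through 06:38:09 and continuing) show the same probe re-running roughly every three seconds, re-gathering logs each time the apiserver check comes back empty. A sketch of that cadence as a poll-until-deadline loop; the two-minute deadline and three-second interval are illustrative assumptions, not minikube's actual timeouts:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Mirrors the log's probe: sudo pgrep -xnf kube-apiserver.*minikube.*
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		// In the real run, each empty probe is followed by the kubelet/dmesg/
		// describe-nodes/CRI-O/container-status gathering seen above.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}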
	I1210 06:38:03.225660  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:03.235918  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:03.235979  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:03.260969  407330 cri.go:89] found id: ""
	I1210 06:38:03.260984  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.260991  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:03.260996  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:03.261058  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:03.286700  407330 cri.go:89] found id: ""
	I1210 06:38:03.286714  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.286721  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:03.286726  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:03.286785  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:03.315672  407330 cri.go:89] found id: ""
	I1210 06:38:03.315686  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.315694  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:03.315699  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:03.315757  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:03.344486  407330 cri.go:89] found id: ""
	I1210 06:38:03.344501  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.344508  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:03.344517  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:03.344576  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:03.371038  407330 cri.go:89] found id: ""
	I1210 06:38:03.371052  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.371059  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:03.371064  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:03.371127  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:03.404397  407330 cri.go:89] found id: ""
	I1210 06:38:03.404412  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.404420  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:03.404425  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:03.404492  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:03.440935  407330 cri.go:89] found id: ""
	I1210 06:38:03.440949  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.440957  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:03.440965  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:03.440975  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:03.509589  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:03.509610  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:03.525492  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:03.525509  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:03.592907  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:03.584405   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.585238   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587039   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587639   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.589379   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:03.584405   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.585238   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587039   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587639   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.589379   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:03.592926  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:03.592938  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:03.669095  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:03.669114  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:06.198833  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:06.209381  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:06.209457  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:06.234410  407330 cri.go:89] found id: ""
	I1210 06:38:06.234424  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.234431  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:06.234437  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:06.234495  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:06.264001  407330 cri.go:89] found id: ""
	I1210 06:38:06.264016  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.264022  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:06.264028  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:06.264087  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:06.289353  407330 cri.go:89] found id: ""
	I1210 06:38:06.289367  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.289375  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:06.289380  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:06.289442  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:06.318627  407330 cri.go:89] found id: ""
	I1210 06:38:06.318643  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.318651  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:06.318656  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:06.318715  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:06.344169  407330 cri.go:89] found id: ""
	I1210 06:38:06.344183  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.344191  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:06.344196  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:06.344255  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:06.372255  407330 cri.go:89] found id: ""
	I1210 06:38:06.372270  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.372277  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:06.372283  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:06.372346  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:06.410561  407330 cri.go:89] found id: ""
	I1210 06:38:06.410575  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.410582  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:06.410590  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:06.410601  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:06.485685  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:06.485706  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:06.500886  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:06.500904  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:06.569054  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:06.561431   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.562119   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.563630   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.564134   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.565584   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:06.569065  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:06.569078  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:06.650735  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:06.650760  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
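	The three-second cycle above repeats for the remainder of this section: probe for a running kube-apiserver process, then list CRI containers for each expected control-plane component. As a rough illustration (not minikube's actual code), the diagnostic pass can be reproduced with the same pgrep and crictl invocations that appear verbatim in the log; everything else in this Go sketch is assumed for the example:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Apiserver liveness probe, as in the log: sudo pgrep -xnf kube-apiserver.*minikube.*
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
			fmt.Println("no kube-apiserver process found:", err)
		}
		// One crictl query per component, as in the log: sudo crictl ps -a --quiet --name=<name>
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", name, len(strings.Fields(string(out))))
		}
	}

	Every query in this section returns an empty ID list, which is why each pass falls through to the log-gathering steps.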
	I1210 06:38:09.182920  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:09.193744  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:09.193805  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:09.224238  407330 cri.go:89] found id: ""
	I1210 06:38:09.224253  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.224260  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:09.224265  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:09.224321  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:09.249812  407330 cri.go:89] found id: ""
	I1210 06:38:09.249827  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.249835  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:09.249840  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:09.249900  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:09.275012  407330 cri.go:89] found id: ""
	I1210 06:38:09.275025  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.275032  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:09.275037  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:09.275094  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:09.299472  407330 cri.go:89] found id: ""
	I1210 06:38:09.299500  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.299508  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:09.299513  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:09.299579  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:09.325485  407330 cri.go:89] found id: ""
	I1210 06:38:09.325499  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.325507  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:09.325512  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:09.325567  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:09.350568  407330 cri.go:89] found id: ""
	I1210 06:38:09.350582  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.350589  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:09.350594  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:09.350657  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:09.380510  407330 cri.go:89] found id: ""
	I1210 06:38:09.380524  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.380531  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:09.380548  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:09.380560  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:09.421824  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:09.421840  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:09.497738  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:09.497764  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:09.513692  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:09.513711  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:09.581478  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:09.573930   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.574589   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.576111   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.576487   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.577997   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:09.581497  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:09.581507  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:12.158761  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:12.169119  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:12.169177  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:12.194655  407330 cri.go:89] found id: ""
	I1210 06:38:12.194670  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.194677  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:12.194683  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:12.194739  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:12.223200  407330 cri.go:89] found id: ""
	I1210 06:38:12.223216  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.223223  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:12.223228  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:12.223293  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:12.249017  407330 cri.go:89] found id: ""
	I1210 06:38:12.249032  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.249043  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:12.249049  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:12.249110  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:12.274392  407330 cri.go:89] found id: ""
	I1210 06:38:12.274407  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.274414  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:12.274420  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:12.274477  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:12.299224  407330 cri.go:89] found id: ""
	I1210 06:38:12.299238  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.299245  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:12.299250  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:12.299310  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:12.324356  407330 cri.go:89] found id: ""
	I1210 06:38:12.324370  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.324377  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:12.324383  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:12.324441  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:12.355846  407330 cri.go:89] found id: ""
	I1210 06:38:12.355876  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.355883  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:12.355892  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:12.355903  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:12.426588  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:12.426608  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:12.446044  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:12.446061  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:12.519015  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:12.508422   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.508965   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.513107   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.513691   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.515195   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:12.519025  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:12.519036  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:12.595463  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:12.595494  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:15.126222  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:15.136973  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:15.137050  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:15.168527  407330 cri.go:89] found id: ""
	I1210 06:38:15.168542  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.168549  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:15.168554  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:15.168615  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:15.195472  407330 cri.go:89] found id: ""
	I1210 06:38:15.195488  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.195496  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:15.195501  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:15.195560  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:15.222272  407330 cri.go:89] found id: ""
	I1210 06:38:15.222286  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.222293  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:15.222298  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:15.222359  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:15.252445  407330 cri.go:89] found id: ""
	I1210 06:38:15.252460  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.252473  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:15.252479  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:15.252541  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:15.279037  407330 cri.go:89] found id: ""
	I1210 06:38:15.279056  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.279063  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:15.279069  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:15.279130  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:15.304272  407330 cri.go:89] found id: ""
	I1210 06:38:15.304287  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.304294  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:15.304299  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:15.304358  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:15.329937  407330 cri.go:89] found id: ""
	I1210 06:38:15.329951  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.329958  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:15.329965  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:15.329976  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:15.344908  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:15.344927  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:15.430525  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:15.420038   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.420859   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.422803   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.424594   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.426170   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:15.430538  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:15.430549  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:15.506380  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:15.506403  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:15.535708  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:15.535725  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:18.102529  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:18.114363  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:18.114433  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:18.140986  407330 cri.go:89] found id: ""
	I1210 06:38:18.141000  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.141007  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:18.141012  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:18.141070  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:18.167798  407330 cri.go:89] found id: ""
	I1210 06:38:18.167812  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.167819  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:18.167827  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:18.167883  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:18.194514  407330 cri.go:89] found id: ""
	I1210 06:38:18.194539  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.194547  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:18.194553  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:18.194614  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:18.219929  407330 cri.go:89] found id: ""
	I1210 06:38:18.219943  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.219949  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:18.219955  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:18.220013  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:18.247728  407330 cri.go:89] found id: ""
	I1210 06:38:18.247742  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.247749  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:18.247755  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:18.247814  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:18.274948  407330 cri.go:89] found id: ""
	I1210 06:38:18.274963  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.274971  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:18.274976  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:18.275034  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:18.301159  407330 cri.go:89] found id: ""
	I1210 06:38:18.301173  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.301196  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:18.301204  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:18.301222  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:18.337936  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:18.337955  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:18.404135  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:18.404153  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:18.420644  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:18.420661  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:18.488180  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:18.479576   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.480035   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.481748   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.482513   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.484281   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:18.488199  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:18.488210  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:21.064064  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:21.074224  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:21.074283  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:21.100332  407330 cri.go:89] found id: ""
	I1210 06:38:21.100347  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.100354  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:21.100359  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:21.100416  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:21.128496  407330 cri.go:89] found id: ""
	I1210 06:38:21.128511  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.128518  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:21.128523  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:21.128583  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:21.165661  407330 cri.go:89] found id: ""
	I1210 06:38:21.165675  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.165682  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:21.165687  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:21.165745  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:21.191177  407330 cri.go:89] found id: ""
	I1210 06:38:21.191191  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.191199  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:21.191204  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:21.191262  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:21.217247  407330 cri.go:89] found id: ""
	I1210 06:38:21.217263  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.217270  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:21.217275  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:21.217336  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:21.243649  407330 cri.go:89] found id: ""
	I1210 06:38:21.243663  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.243670  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:21.243675  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:21.243731  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:21.272574  407330 cri.go:89] found id: ""
	I1210 06:38:21.272589  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.272596  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:21.272604  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:21.272615  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:21.336563  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:21.328507   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.329001   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.330691   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.331320   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.332859   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:21.336573  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:21.336583  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:21.419141  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:21.419163  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:21.452486  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:21.452504  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:21.518913  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:21.518934  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:24.035407  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:24.051364  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:24.051491  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:24.079890  407330 cri.go:89] found id: ""
	I1210 06:38:24.079905  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.079913  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:24.079918  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:24.079976  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:24.108058  407330 cri.go:89] found id: ""
	I1210 06:38:24.108072  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.108089  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:24.108094  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:24.108160  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:24.136304  407330 cri.go:89] found id: ""
	I1210 06:38:24.136318  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.136325  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:24.136331  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:24.136388  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:24.166784  407330 cri.go:89] found id: ""
	I1210 06:38:24.166805  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.166813  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:24.166819  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:24.166879  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:24.194254  407330 cri.go:89] found id: ""
	I1210 06:38:24.194270  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.194278  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:24.194283  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:24.194349  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:24.220032  407330 cri.go:89] found id: ""
	I1210 06:38:24.220046  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.220053  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:24.220058  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:24.220125  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:24.249252  407330 cri.go:89] found id: ""
	I1210 06:38:24.249267  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.249275  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:24.249282  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:24.249301  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:24.332782  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:24.332809  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:24.363293  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:24.363313  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:24.439310  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:24.439334  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:24.454866  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:24.454883  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:24.518646  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:24.510636   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.511199   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.512759   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.513269   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.514934   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:27.018916  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:27.029680  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:27.029748  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:27.057853  407330 cri.go:89] found id: ""
	I1210 06:38:27.057868  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.057876  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:27.057881  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:27.057943  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:27.088489  407330 cri.go:89] found id: ""
	I1210 06:38:27.088504  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.088512  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:27.088517  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:27.088576  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:27.114135  407330 cri.go:89] found id: ""
	I1210 06:38:27.114150  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.114158  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:27.114163  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:27.114222  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:27.144417  407330 cri.go:89] found id: ""
	I1210 06:38:27.144431  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.144438  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:27.144443  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:27.144502  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:27.170599  407330 cri.go:89] found id: ""
	I1210 06:38:27.170613  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.170621  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:27.170626  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:27.170704  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:27.196493  407330 cri.go:89] found id: ""
	I1210 06:38:27.196508  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.196516  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:27.196521  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:27.196577  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:27.222440  407330 cri.go:89] found id: ""
	I1210 06:38:27.222455  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.222462  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:27.222469  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:27.222480  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:27.288558  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:27.288578  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:27.304274  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:27.304290  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:27.370398  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:27.361823   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.362522   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.364129   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.364518   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.366357   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:27.370408  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:27.370419  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:27.458800  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:27.458821  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
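	Each failed "describe nodes" above reports the same root cause: nothing is listening on the apiserver port, so the client's TCP connect is refused before any HTTP exchange happens. A minimal probe of that condition, assuming only the host and port taken from the log (the program itself is illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// kubectl's target in the log is https://localhost:8441; a plain TCP
		// dial is enough to reproduce "connect: connection refused" when no
		// apiserver is up.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8441")
	}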
	I1210 06:38:29.988954  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:29.999798  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:29.999864  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:30.095338  407330 cri.go:89] found id: ""
	I1210 06:38:30.095356  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.095364  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:30.095370  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:30.095440  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:30.129132  407330 cri.go:89] found id: ""
	I1210 06:38:30.129148  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.129156  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:30.129162  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:30.129271  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:30.157101  407330 cri.go:89] found id: ""
	I1210 06:38:30.157117  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.157124  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:30.157130  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:30.157224  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:30.184791  407330 cri.go:89] found id: ""
	I1210 06:38:30.184806  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.184814  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:30.184819  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:30.184885  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:30.211932  407330 cri.go:89] found id: ""
	I1210 06:38:30.211958  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.211966  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:30.211971  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:30.212041  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:30.238373  407330 cri.go:89] found id: ""
	I1210 06:38:30.238398  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.238407  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:30.238413  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:30.238479  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:30.266144  407330 cri.go:89] found id: ""
	I1210 06:38:30.266159  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.266167  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:30.266176  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:30.266187  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:30.337549  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:30.337570  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:30.353715  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:30.353731  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:30.430797  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:30.422887   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.423661   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.425295   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.425615   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.427098   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:30.422887   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.423661   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.425295   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.425615   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.427098   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
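Every "describe nodes" attempt in this run fails the same way: nothing is listening on apiserver port 8441. Two quick checks that would confirm this from inside the node (standard curl/ss invocations, assumed rather than taken from this log):

    curl -ksS https://localhost:8441/healthz   # expect "connection refused" while the apiserver is down
    sudo ss -ltnp | grep 8441                  # should print no listener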
	I1210 06:38:30.430808  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:30.430821  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:30.510900  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:30.510921  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
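The container-status command is deliberately defensive: `which crictl || echo crictl` falls back to the bare binary name when crictl is not on root's PATH, and the trailing `|| sudo docker ps -a` covers hosts without crictl at all. The same fallback as a standalone one-liner:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a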
	I1210 06:38:33.040458  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:33.051069  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:33.051132  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:33.081117  407330 cri.go:89] found id: ""
	I1210 06:38:33.081131  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.081138  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:33.081144  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:33.081232  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:33.110972  407330 cri.go:89] found id: ""
	I1210 06:38:33.110986  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.110993  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:33.110998  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:33.111055  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:33.136083  407330 cri.go:89] found id: ""
	I1210 06:38:33.136098  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.136104  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:33.136110  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:33.136170  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:33.162539  407330 cri.go:89] found id: ""
	I1210 06:38:33.162554  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.162561  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:33.162567  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:33.162628  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:33.192025  407330 cri.go:89] found id: ""
	I1210 06:38:33.192039  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.192047  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:33.192053  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:33.192114  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:33.217529  407330 cri.go:89] found id: ""
	I1210 06:38:33.217544  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.217562  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:33.217568  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:33.217637  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:33.242901  407330 cri.go:89] found id: ""
	I1210 06:38:33.242916  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.242923  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:33.242931  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:33.242942  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:33.311877  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:33.311897  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:33.327423  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:33.327438  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:33.395423  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:33.386462   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.387346   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.388905   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.389556   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.391613   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:33.386462   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.387346   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.388905   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.389556   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.391613   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:33.395434  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:33.395444  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:33.477529  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:33.477551  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:36.008120  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:36.021683  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:36.021745  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:36.049460  407330 cri.go:89] found id: ""
	I1210 06:38:36.049475  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.049482  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:36.049487  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:36.049560  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:36.076929  407330 cri.go:89] found id: ""
	I1210 06:38:36.076944  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.076951  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:36.076956  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:36.077017  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:36.103193  407330 cri.go:89] found id: ""
	I1210 06:38:36.103208  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.103214  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:36.103219  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:36.103285  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:36.129995  407330 cri.go:89] found id: ""
	I1210 06:38:36.130009  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.130024  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:36.130029  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:36.130087  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:36.156753  407330 cri.go:89] found id: ""
	I1210 06:38:36.156781  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.156789  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:36.156794  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:36.156857  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:36.188439  407330 cri.go:89] found id: ""
	I1210 06:38:36.188453  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.188461  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:36.188466  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:36.188525  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:36.214278  407330 cri.go:89] found id: ""
	I1210 06:38:36.214293  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.214300  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:36.214309  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:36.214321  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:36.280730  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:36.280750  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:36.296203  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:36.296220  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:36.364197  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:36.355593   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.356352   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358159   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358872   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.360561   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:36.355593   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.356352   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358159   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358872   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.360561   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:36.364209  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:36.364222  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:36.458076  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:36.458097  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:38.987911  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:38.998557  407330 kubeadm.go:602] duration metric: took 4m3.870918207s to restartPrimaryControlPlane
	W1210 06:38:38.998620  407330 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 06:38:38.998704  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 06:38:39.409934  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:38:39.423184  407330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:38:39.431304  407330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:38:39.431358  407330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:38:39.439341  407330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:38:39.439350  407330 kubeadm.go:158] found existing configuration files:
	
	I1210 06:38:39.439401  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:38:39.447538  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:38:39.447592  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:38:39.454886  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:38:39.462719  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:38:39.462778  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:38:39.470357  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:38:39.477894  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:38:39.477950  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:38:39.485341  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:38:39.493235  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:38:39.493292  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
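The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any file under /etc/kubernetes that does not mention the expected control-plane endpoint is removed before kubeadm init runs. Condensed into a single loop (same logic as the logged commands):

    ep="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "$ep" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done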
	I1210 06:38:39.500743  407330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:38:39.538320  407330 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:38:39.538555  407330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:38:39.610131  407330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:38:39.610196  407330 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:38:39.610230  407330 kubeadm.go:319] OS: Linux
	I1210 06:38:39.610281  407330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:38:39.610328  407330 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:38:39.610374  407330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:38:39.610421  407330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:38:39.610468  407330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:38:39.610517  407330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:38:39.610561  407330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:38:39.610608  407330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:38:39.610653  407330 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:38:39.676087  407330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:38:39.676189  407330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:38:39.676279  407330 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:38:39.683789  407330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:38:39.689387  407330 out.go:252]   - Generating certificates and keys ...
	I1210 06:38:39.689490  407330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:38:39.689554  407330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:38:39.689629  407330 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:38:39.689689  407330 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:38:39.689759  407330 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:38:39.689811  407330 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:38:39.689904  407330 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:38:39.689978  407330 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:38:39.690060  407330 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:38:39.690139  407330 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:38:39.690176  407330 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:38:39.690241  407330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:38:40.131783  407330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:38:40.503719  407330 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:38:40.658362  407330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:38:41.256208  407330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:38:41.407412  407330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:38:41.408125  407330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:38:41.410853  407330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:38:41.414436  407330 out.go:252]   - Booting up control plane ...
	I1210 06:38:41.414546  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:38:41.414623  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:38:41.414696  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:38:41.431657  407330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:38:41.431964  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:38:41.440211  407330 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:38:41.440329  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:38:41.440568  407330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:38:41.565122  407330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:38:41.565287  407330 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:42:41.565436  407330 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000253721s
	I1210 06:42:41.565465  407330 kubeadm.go:319] 
	I1210 06:42:41.565522  407330 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:42:41.565554  407330 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:42:41.565658  407330 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:42:41.565663  407330 kubeadm.go:319] 
	I1210 06:42:41.565766  407330 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:42:41.565797  407330 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:42:41.565827  407330 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:42:41.565830  407330 kubeadm.go:319] 
	I1210 06:42:41.570718  407330 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:42:41.571209  407330 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:42:41.571330  407330 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:42:41.571595  407330 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:42:41.571607  407330 kubeadm.go:319] 
	I1210 06:42:41.571752  407330 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
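kubeadm's wait-control-plane phase polls the kubelet's local health endpoint for up to 4m0s and never sees it come up. The probe it performs, plus the two checks it recommends, as runnable commands taken from the messages above:

    curl -sSL http://127.0.0.1:10248/healthz   # the call kubeadm retries until the deadline
    systemctl status kubelet
    journalctl -xeu kubelet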
	W1210 06:42:41.571857  407330 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000253721s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
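The cgroups v1 warning, repeated in both init attempts on this 5.15 AWS kernel, names the most plausible root cause: kubelet v1.35 refuses cgroup v1 hosts unless explicitly opted in. A hedged sketch of that opt-in via kubeadm's patches mechanism, which this run already exercises ([patches] ... target "kubeletconfiguration"); the patch directory and the lower-camel spelling failCgroupV1 of the 'FailCgroupV1' option named in the warning are assumptions, not confirmed by this log:

    sudo mkdir -p /etc/kubeadm-patches   # hypothetical location
    # strategic-merge patch against the KubeletConfiguration target
    printf 'failCgroupV1: false\n' | sudo tee /etc/kubeadm-patches/kubeletconfiguration+strategic.yaml
    # then point kubeadm init at the directory, e.g. via its --patches option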
	
	I1210 06:42:41.571950  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 06:42:41.983114  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:42:41.996619  407330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:42:41.996677  407330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:42:42.015710  407330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:42:42.015721  407330 kubeadm.go:158] found existing configuration files:
	
	I1210 06:42:42.015783  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:42:42.031380  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:42:42.031448  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:42:42.040300  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:42:42.049113  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:42:42.049177  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:42:42.057272  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:42:42.066509  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:42:42.066573  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:42:42.076663  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:42:42.086749  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:42:42.086829  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:42:42.096582  407330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:42:42.144385  407330 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:42:42.144469  407330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:42:42.248727  407330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:42:42.248801  407330 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:42:42.248835  407330 kubeadm.go:319] OS: Linux
	I1210 06:42:42.248888  407330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:42:42.248946  407330 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:42:42.249004  407330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:42:42.249052  407330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:42:42.249117  407330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:42:42.249198  407330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:42:42.249245  407330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:42:42.249306  407330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:42:42.249359  407330 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:42:42.316721  407330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:42:42.316825  407330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:42:42.316916  407330 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:42:42.325666  407330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:42:42.330985  407330 out.go:252]   - Generating certificates and keys ...
	I1210 06:42:42.331095  407330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:42:42.331182  407330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:42:42.331258  407330 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:42:42.331331  407330 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:42:42.331424  407330 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:42:42.331487  407330 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:42:42.331560  407330 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:42:42.331637  407330 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:42:42.331721  407330 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:42:42.331801  407330 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:42:42.331847  407330 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:42:42.331912  407330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:42:42.541750  407330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:42:43.048349  407330 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:42:43.167759  407330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:42:43.323314  407330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:42:43.407090  407330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:42:43.408333  407330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:42:43.412234  407330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:42:43.415621  407330 out.go:252]   - Booting up control plane ...
	I1210 06:42:43.415734  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:42:43.415811  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:42:43.416436  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:42:43.431439  407330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:42:43.431813  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:42:43.438586  407330 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:42:43.438900  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:42:43.438951  407330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:42:43.563199  407330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:42:43.563333  407330 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:46:43.563419  407330 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000308988s
	I1210 06:46:43.563446  407330 kubeadm.go:319] 
	I1210 06:46:43.563502  407330 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:46:43.563534  407330 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:46:43.563637  407330 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:46:43.563641  407330 kubeadm.go:319] 
	I1210 06:46:43.563744  407330 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:46:43.563775  407330 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:46:43.563804  407330 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:46:43.563807  407330 kubeadm.go:319] 
	I1210 06:46:43.567965  407330 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:46:43.568389  407330 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:46:43.568496  407330 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:46:43.568730  407330 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:46:43.568734  407330 kubeadm.go:319] 
	I1210 06:46:43.568801  407330 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:46:43.568851  407330 kubeadm.go:403] duration metric: took 12m8.481939807s to StartCluster
	I1210 06:46:43.568881  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:46:43.568941  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:46:43.595798  407330 cri.go:89] found id: ""
	I1210 06:46:43.595831  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.595854  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:46:43.595860  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:46:43.595925  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:46:43.621092  407330 cri.go:89] found id: ""
	I1210 06:46:43.621107  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.621114  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:46:43.621123  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:46:43.621181  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:46:43.646506  407330 cri.go:89] found id: ""
	I1210 06:46:43.646520  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.646528  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:46:43.646533  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:46:43.646593  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:46:43.671975  407330 cri.go:89] found id: ""
	I1210 06:46:43.671990  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.671997  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:46:43.672003  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:46:43.672059  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:46:43.698910  407330 cri.go:89] found id: ""
	I1210 06:46:43.698925  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.698932  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:46:43.698937  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:46:43.698997  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:46:43.727644  407330 cri.go:89] found id: ""
	I1210 06:46:43.727660  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.727667  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:46:43.727672  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:46:43.727732  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:46:43.752849  407330 cri.go:89] found id: ""
	I1210 06:46:43.752864  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.752871  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:46:43.752879  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:46:43.752889  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:46:43.818161  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:46:43.818181  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:46:43.833400  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:46:43.833417  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:46:43.902591  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:46:43.894870   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.895546   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897128   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897673   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.899158   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:46:43.894870   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.895546   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897128   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897673   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.899158   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:46:43.902602  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:46:43.902614  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:46:43.975424  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:46:43.975445  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 06:46:44.022327  407330 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:46:44.022377  407330 out.go:285] * 
	W1210 06:46:44.022442  407330 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:46:44.022452  407330 out.go:285] * 
	W1210 06:46:44.024584  407330 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:46:44.031496  407330 out.go:203] 
	W1210 06:46:44.034389  407330 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:46:44.034453  407330 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:46:44.034475  407330 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:46:44.037811  407330 out.go:203] 
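The Suggestion above is directly actionable. A minimal retry sketch, using the profile name and binary path that appear throughout this report; the flag is quoted verbatim from minikube's own hint (the underlying cgroup-v1 incompatibility is examined after the kubelet section below):

	# Retry the start with the systemd cgroup driver, as suggested above:
	out/minikube-linux-arm64 start -p functional-253997 --extra-config=kubelet.cgroup-driver=systemd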
	
	
	==> CRI-O <==
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914305234Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914347581Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914410941Z" level=info msg="Create NRI interface"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914519907Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914528243Z" level=info msg="runtime interface created"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914540707Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914547246Z" level=info msg="runtime interface starting up..."
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914553523Z" level=info msg="starting plugins..."
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914566389Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914635518Z" level=info msg="No systemd watchdog enabled"
	Dec 10 06:34:32 functional-253997 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.679749304Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=256aed1f-deb7-4ef3-85cd-131eefce5f31 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.680508073Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=d66c85ac-bdac-47c8-b0cb-0b9c6495c2c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681012677Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=9d08e49c-548c-44b3-98b1-7f3a5851a031 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681572306Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=0bc6e3be-4b4d-4362-bc99-b8372d06365e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681969496Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2f86c405-f63c-4d07-a2ec-618b9449eabe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.682410707Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f71d0106-3216-4008-9111-b1a84be0126f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.682849883Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=c187c18f-0638-4353-a242-3d51d64c2a33 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.320018913Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=2592a99c-52fd-46d2-9cab-44207ad0e0c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321028729Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=f6ff0057-6b99-4df5-9aca-bca9adc94791 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321868157Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=9722b5ff-a4b3-445c-b3c4-2f4e55341b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322453627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=b61d672b-0949-4316-8a7f-4071c86b03d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322949111Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=450c82e8-dfb9-4142-8b40-08c4b80c0ef4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.32354821Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=0920e0ed-8fd6-46c5-8404-88b74f388f67 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.324043932Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=51273fc0-dd5c-4357-b908-13dc96e1efa4 name=/runtime.v1.ImageService/ImageStatus
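Note that CRI-O itself is healthy here: it keeps answering /runtime.v1.ImageService/ImageStatus calls while the kubelet crash-loops, so the failure sits on the kubelet side. A quick cross-check sketch, assuming the profile name from this report (the same `sudo crictl` pattern appears in the Audit table below):

	# CRI-O is serving CRI requests; confirm the runtime state and list containers:
	out/minikube-linux-arm64 -p functional-253997 ssh -- sudo crictl info
	out/minikube-linux-arm64 -p functional-253997 ssh -- sudo crictl ps -a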
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:46:45.375144   21845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:45.375751   21845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:45.381601   21845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:45.382226   21845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:45.383728   21845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
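The empty container-status table above explains this failure: the apiserver static pod was never created, so nothing listens on port 8441. A probe sketch, with the URL taken from the error message (expect "connection refused" until the kubelet comes up):

	# Probe the apiserver endpoint from inside the node:
	out/minikube-linux-arm64 -p functional-253997 ssh -- curl -ks https://localhost:8441/healthz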
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 06:15] overlayfs: idmapped layers are currently not supported
	[Dec10 06:16] overlayfs: idmapped layers are currently not supported
	[Dec10 06:19] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:46:45 up  3:29,  0 user,  load average: 0.09, 0.13, 0.44
	Linux functional-253997 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:46:42 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:46:43 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 639.
	Dec 10 06:46:43 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:46:43 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:46:43 functional-253997 kubelet[21654]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:46:43 functional-253997 kubelet[21654]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:46:43 functional-253997 kubelet[21654]: E1210 06:46:43.430074   21654 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:46:43 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:46:43 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:46:44 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 640.
	Dec 10 06:46:44 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:46:44 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:46:44 functional-253997 kubelet[21740]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:46:44 functional-253997 kubelet[21740]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:46:44 functional-253997 kubelet[21740]: E1210 06:46:44.186874   21740 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:46:44 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:46:44 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:46:44 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 641.
	Dec 10 06:46:44 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:46:44 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:46:44 functional-253997 kubelet[21766]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:46:44 functional-253997 kubelet[21766]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:46:44 functional-253997 kubelet[21766]: E1210 06:46:44.929593   21766 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:46:44 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:46:44 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
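This journal excerpt is the root cause: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and systemd restarts it endlessly (restart counters 639-641), which is why kubeadm's 4-minute healthz wait above times out. A diagnostic sketch, assuming the profile name; the lowercase config-file key is an assumption inferred from the 'FailCgroupV1' option named in the kubeadm warning:

	# Which cgroup hierarchy does the node see?
	#   "cgroup2fs" => cgroup v2; "tmpfs" => cgroup v1 (the failing case here)
	out/minikube-linux-arm64 -p functional-253997 ssh -- stat -fc %T /sys/fs/cgroup
	# Per the kubeadm warning, kubelet >= v1.35 only tolerates cgroup v1 when
	# FailCgroupV1 is set to false in the kubelet configuration, e.g. (assumed key):
	#   failCgroupV1: false   # in /var/lib/kubelet/config.yaml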
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 2 (375.188842ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (737.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-253997 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-253997 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (63.877916ms)

                                                
                                                
** stderr ** 
	E1210 06:46:46.460151  419406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:46:46.461893  419406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:46:46.463708  419406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:46:46.465279  419406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:46:46.466785  419406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-253997 get po -l tier=control-plane -n kube-system -o=json": exit status 1
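ComponentHealth fails for the same underlying reason: the apiserver at 192.168.49.2:8441 (the container IP shown in the docker inspect below) refuses connections because the kubelet never brought it up. A manual re-check sketch, using the same commands the test and its helpers run:

	# Reproduce the check by hand once the cluster is repaired:
	out/minikube-linux-arm64 status -p functional-253997
	kubectl --context functional-253997 get po -l tier=control-plane -n kube-system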
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-253997
helpers_test.go:244: (dbg) docker inspect functional-253997:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	        "Created": "2025-12-10T06:19:33.832297734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:33.914699563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hosts",
	        "LogPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7-json.log",
	        "Name": "/functional-253997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-253997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-253997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	                "LowerDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-253997",
	                "Source": "/var/lib/docker/volumes/functional-253997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253997",
	                "name.minikube.sigs.k8s.io": "functional-253997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484c42edbd556e17e2c5a836571d3275bc9cbd157b70c100e08a5b73322a62e6",
	            "SandboxKey": "/var/run/docker/netns/484c42edbd55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ba:52:c5:6e:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fc5aadc020d5981cfdab05726de722ae81d653cbc30b66014568069657370fe7",
	                    "EndpointID": "9ecd4882ea5c9fe5581b68ad15a0a2f450e34777aee426fcc158008c81076e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253997",
	                        "256e059cdf1e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997: exit status 2 (319.597009ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-013831 image build -t localhost/my-image:functional-013831 testdata/build --alsologtostderr                                          │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls --format table --alsologtostderr                                                                                     │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ update-context │ functional-013831 update-context --alsologtostderr -v=2                                                                                         │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ image          │ functional-013831 image ls                                                                                                                      │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ delete         │ -p functional-013831                                                                                                                            │ functional-013831 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start          │ -p functional-253997 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ start          │ -p functional-253997 --alsologtostderr -v=8                                                                                                     │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:28 UTC │                     │
	│ cache          │ functional-253997 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache add registry.k8s.io/pause:latest                                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache add minikube-local-cache-test:functional-253997                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ functional-253997 cache delete minikube-local-cache-test:functional-253997                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl images                                                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │                     │
	│ cache          │ functional-253997 cache reload                                                                                                                  │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh            │ functional-253997 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ kubectl        │ functional-253997 kubectl -- --context functional-253997 get pods                                                                               │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │                     │
	│ start          │ -p functional-253997 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
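	Rows with an empty END TIME never completed; the last start row is the invocation behind the ExtraConfig failure above. To reproduce it outside CI (binary path as used throughout this report):
	
	# The start command the failing test issued (copied from the Audit table):
	out/minikube-linux-arm64 start -p functional-253997 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all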
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:34:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:34:29.186876  407330 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:34:29.187053  407330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:34:29.187058  407330 out.go:374] Setting ErrFile to fd 2...
	I1210 06:34:29.187062  407330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:34:29.187341  407330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:34:29.187713  407330 out.go:368] Setting JSON to false
	I1210 06:34:29.188576  407330 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11822,"bootTime":1765336648,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:34:29.188634  407330 start.go:143] virtualization:  
	I1210 06:34:29.192149  407330 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:34:29.195073  407330 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:34:29.195162  407330 notify.go:221] Checking for updates...
	I1210 06:34:29.200831  407330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:34:29.203909  407330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:34:29.206776  407330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:34:29.209617  407330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:34:29.212440  407330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:34:29.215839  407330 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:34:29.215937  407330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:34:29.239404  407330 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:34:29.239516  407330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:34:29.302303  407330 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:34:29.292878865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:34:29.302405  407330 docker.go:319] overlay module found
	I1210 06:34:29.305588  407330 out.go:179] * Using the docker driver based on existing profile
	I1210 06:34:29.308369  407330 start.go:309] selected driver: docker
	I1210 06:34:29.308379  407330 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:34:29.308484  407330 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:34:29.308590  407330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:34:29.367055  407330 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:34:29.35802689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:34:29.367451  407330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:34:29.367476  407330 cni.go:84] Creating CNI manager for ""
	I1210 06:34:29.367527  407330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:34:29.367575  407330 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:34:29.370834  407330 out.go:179] * Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	I1210 06:34:29.373779  407330 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:34:29.376601  407330 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:34:29.379406  407330 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:34:29.379504  407330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:34:29.398798  407330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:34:29.398809  407330 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:34:29.439425  407330 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 06:34:29.641198  407330 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1210 06:34:29.641344  407330 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/config.json ...
	I1210 06:34:29.641548  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:29.641601  407330 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:34:29.641630  407330 start.go:360] acquireMachinesLock for functional-253997: {Name:mkd4a204596bf14d7530a6c5c103756527acdf26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:29.641675  407330 start.go:364] duration metric: took 26.355µs to acquireMachinesLock for "functional-253997"
	I1210 06:34:29.641688  407330 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:34:29.641692  407330 fix.go:54] fixHost starting: 
	I1210 06:34:29.641950  407330 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:34:29.660018  407330 fix.go:112] recreateIfNeeded on functional-253997: state=Running err=<nil>
	W1210 06:34:29.660039  407330 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:34:29.663260  407330 out.go:252] * Updating the running docker "functional-253997" container ...
	I1210 06:34:29.663287  407330 machine.go:94] provisionDockerMachine start ...
	I1210 06:34:29.663366  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:29.683378  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:29.683692  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:29.683698  407330 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:34:29.821832  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:29.837224  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:34:29.837239  407330 ubuntu.go:182] provisioning hostname "functional-253997"
	I1210 06:34:29.837320  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:29.868971  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:29.869301  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:29.869310  407330 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-253997 && echo "functional-253997" | sudo tee /etc/hostname
	I1210 06:34:29.986840  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:30.112009  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:34:30.112104  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:30.132596  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:30.132908  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:30.132923  407330 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-253997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-253997/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-253997' | sudo tee -a /etc/hosts; 
				fi
			fi
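	The SSH snippet above is an idempotent hosts-file edit: if no /etc/hosts line already ends in the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one. On a fresh container the net effect should be a single line like the following (reconstructed from the script, not captured from this run):
		127.0.1.1 functional-253997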
	I1210 06:34:30.208840  407330 cache.go:107] acquiring lock: {Name:mk9268471d41d450748d4b0133d1a043378776f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.208835  407330 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.208914  407330 cache.go:107] acquiring lock: {Name:mk8250dc655f821bf2674ed77a7683798ede4f4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.208957  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:34:30.208967  407330 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 138.989µs
	I1210 06:34:30.208975  407330 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:34:30.208986  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:34:30.209001  407330 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 97.733µs
	I1210 06:34:30.208999  407330 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209007  407330 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:34:30.209031  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:34:30.209036  407330 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.599µs
	I1210 06:34:30.209024  407330 cache.go:107] acquiring lock: {Name:mk1c0334e51772e4f6b3429cc05b91ec19d54b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209041  407330 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:34:30.209051  407330 cache.go:107] acquiring lock: {Name:mkc2831727d74e5fadb1c00bd89f0236b385200e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209067  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:34:30.209072  407330 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 53.268µs
	I1210 06:34:30.209089  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:34:30.209088  407330 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:34:30.209095  407330 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 45.753µs
	I1210 06:34:30.209100  407330 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:34:30.209108  407330 cache.go:107] acquiring lock: {Name:mk7e052900e41419dc78d3311f27db508a8f64b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209102  407330 cache.go:107] acquiring lock: {Name:mk41aaba55a3296a937cf380d44dc9920df69546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209134  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:34:30.209138  407330 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 31.27µs
	I1210 06:34:30.209143  407330 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:34:30.209145  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:34:30.209151  407330 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 50.536µs
	I1210 06:34:30.209155  407330 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:34:30.209160  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:34:30.209163  407330 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 351.676µs
	I1210 06:34:30.209168  407330 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:34:30.209180  407330 cache.go:87] Successfully saved all images to host disk.
	I1210 06:34:30.290041  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:34:30.290057  407330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 06:34:30.290077  407330 ubuntu.go:190] setting up certificates
	I1210 06:34:30.290086  407330 provision.go:84] configureAuth start
	I1210 06:34:30.290163  407330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:34:30.308042  407330 provision.go:143] copyHostCerts
	I1210 06:34:30.308132  407330 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 06:34:30.308140  407330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:34:30.308215  407330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 06:34:30.308356  407330 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 06:34:30.308366  407330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:34:30.308393  407330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 06:34:30.308451  407330 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 06:34:30.308454  407330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:34:30.308477  407330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 06:34:30.308526  407330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.functional-253997 san=[127.0.0.1 192.168.49.2 functional-253997 localhost minikube]
	I1210 06:34:30.594902  407330 provision.go:177] copyRemoteCerts
	I1210 06:34:30.594965  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:34:30.595003  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:30.611740  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:30.721082  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:34:30.738821  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:34:30.756666  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:34:30.774292  407330 provision.go:87] duration metric: took 484.176925ms to configureAuth
	I1210 06:34:30.774310  407330 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:34:30.774512  407330 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:34:30.774629  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:30.792842  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:30.793168  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:30.793179  407330 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:34:31.164456  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:34:31.164470  407330 machine.go:97] duration metric: took 1.501175708s to provisionDockerMachine
	I1210 06:34:31.164497  407330 start.go:293] postStartSetup for "functional-253997" (driver="docker")
	I1210 06:34:31.164510  407330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:34:31.164571  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:34:31.164607  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.185147  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.293395  407330 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:34:31.296969  407330 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:34:31.296987  407330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:34:31.296998  407330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 06:34:31.297053  407330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 06:34:31.297133  407330 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 06:34:31.297238  407330 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> hosts in /etc/test/nested/copy/364265
	I1210 06:34:31.297285  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364265
	I1210 06:34:31.305181  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:34:31.324368  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts --> /etc/test/nested/copy/364265/hosts (40 bytes)
	I1210 06:34:31.342686  407330 start.go:296] duration metric: took 178.173087ms for postStartSetup
	I1210 06:34:31.342778  407330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:34:31.342817  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.360907  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.462708  407330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:34:31.467744  407330 fix.go:56] duration metric: took 1.826044535s for fixHost
	I1210 06:34:31.467760  407330 start.go:83] releasing machines lock for "functional-253997", held for 1.826077816s
	I1210 06:34:31.467840  407330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:34:31.485284  407330 ssh_runner.go:195] Run: cat /version.json
	I1210 06:34:31.485341  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.485360  407330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:34:31.485410  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.504331  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.505583  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.702850  407330 ssh_runner.go:195] Run: systemctl --version
	I1210 06:34:31.710100  407330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:34:31.751135  407330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:34:31.755552  407330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:34:31.755612  407330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:34:31.763681  407330 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
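	The find command above renames any bridge or podman CNI config under /etc/cni/net.d to *.mk_disabled so it cannot conflict with the kindnet CNI minikube selected earlier for the docker+crio combination; here nothing matched. Had a config been present, the effect would have been equivalent to (file name hypothetical):
		sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled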
	I1210 06:34:31.763695  407330 start.go:496] detecting cgroup driver to use...
	I1210 06:34:31.763726  407330 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:34:31.763773  407330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:34:31.779177  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:34:31.792657  407330 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:34:31.792726  407330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:34:31.808481  407330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:34:31.821835  407330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:34:31.953412  407330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:34:32.070663  407330 docker.go:234] disabling docker service ...
	I1210 06:34:32.070719  407330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:34:32.089582  407330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:34:32.103903  407330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:34:32.229247  407330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:34:32.354550  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:34:32.368208  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:34:32.383037  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:32.544686  407330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:34:32.544766  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.554538  407330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:34:32.554607  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.563600  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.572445  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.581785  407330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:34:32.589992  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.599257  407330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.607809  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.616790  407330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:34:32.624404  407330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
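	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a sketch assembled from the commands, not a capture from the node):
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]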
	I1210 06:34:32.631884  407330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:34:32.742959  407330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:34:32.924926  407330 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:34:32.925015  407330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:34:32.931953  407330 start.go:564] Will wait 60s for crictl version
	I1210 06:34:32.932037  407330 ssh_runner.go:195] Run: which crictl
	I1210 06:34:32.936975  407330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:34:32.972701  407330 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:34:32.972786  407330 ssh_runner.go:195] Run: crio --version
	I1210 06:34:33.008288  407330 ssh_runner.go:195] Run: crio --version
	I1210 06:34:33.045101  407330 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:34:33.048270  407330 cli_runner.go:164] Run: docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:34:33.065511  407330 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:34:33.072736  407330 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 06:34:33.075695  407330 kubeadm.go:884] updating cluster {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:34:33.075981  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:33.225944  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:33.376252  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:33.530247  407330 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:34:33.530325  407330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:34:33.568941  407330 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:34:33.568954  407330 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:34:33.568960  407330 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1210 06:34:33.569060  407330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-253997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:34:33.569145  407330 ssh_runner.go:195] Run: crio config
	I1210 06:34:33.643186  407330 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 06:34:33.643211  407330 cni.go:84] Creating CNI manager for ""
	I1210 06:34:33.643224  407330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:34:33.643242  407330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:34:33.643280  407330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-253997 NodeName:functional-253997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:34:33.643429  407330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-253997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:34:33.643524  407330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:34:33.653419  407330 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:34:33.653495  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:34:33.663141  407330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:34:33.678587  407330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:34:33.693949  407330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1210 06:34:33.710464  407330 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:34:33.714723  407330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:34:33.827439  407330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:34:34.376520  407330 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997 for IP: 192.168.49.2
	I1210 06:34:34.376531  407330 certs.go:195] generating shared ca certs ...
	I1210 06:34:34.376561  407330 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:34:34.376695  407330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 06:34:34.376739  407330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 06:34:34.376746  407330 certs.go:257] generating profile certs ...
	I1210 06:34:34.376830  407330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key
	I1210 06:34:34.376883  407330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423
	I1210 06:34:34.376918  407330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key
	I1210 06:34:34.377046  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 06:34:34.377076  407330 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 06:34:34.377083  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:34:34.377112  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:34:34.377138  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:34:34.377165  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 06:34:34.377235  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:34:34.377907  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:34:34.400957  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:34:34.422626  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:34:34.444886  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:34:34.463194  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:34:34.485380  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:34:34.504994  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:34:34.523903  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:34:34.542693  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 06:34:34.560781  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 06:34:34.580039  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:34:34.598952  407330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:34:34.612103  407330 ssh_runner.go:195] Run: openssl version
	I1210 06:34:34.618607  407330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.626715  407330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 06:34:34.634462  407330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.638500  407330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.638572  407330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.680023  407330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:34:34.687891  407330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.695733  407330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:34:34.704338  407330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.708573  407330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.708632  407330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.750214  407330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:34:34.758402  407330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.766563  407330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 06:34:34.774837  407330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.779114  407330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.779177  407330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.821136  407330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
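	The test -L checks above follow OpenSSL's hashed CA-directory convention: each /etc/ssl/certs/<hash>.0 symlink (3ec20f2e.0, b5213941.0, 51391683.0) is named after the subject hash that the preceding openssl x509 -hash call printed, so b5213941.0 corresponds to minikubeCA.pem. The link minikube maintains amounts to (a sketch):
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"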
	I1210 06:34:34.829270  407330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:34:34.833529  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:34:34.876277  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:34:34.917707  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:34:34.959457  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:34:35.001865  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:34:35.044914  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
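	Each openssl x509 -checkend 86400 call above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, which is presumably how minikube decides whether a cert needs regeneration before restarting the control plane. Standalone form:
		openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
		  && echo "valid for at least 24h" || echo "expires within 24h"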
	I1210 06:34:35.086921  407330 kubeadm.go:401] StartCluster: {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:34:35.087016  407330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:34:35.087089  407330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:34:35.117459  407330 cri.go:89] found id: ""
	I1210 06:34:35.117522  407330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:34:35.127607  407330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:34:35.127629  407330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:34:35.127685  407330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:34:35.136902  407330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.137526  407330 kubeconfig.go:125] found "functional-253997" server: "https://192.168.49.2:8441"
	I1210 06:34:35.138779  407330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:34:35.148051  407330 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 06:19:55.285285887 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:34:33.703709051 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1210 06:34:35.148070  407330 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:34:35.148082  407330 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 06:34:35.148140  407330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:34:35.178671  407330 cri.go:89] found id: ""
	I1210 06:34:35.178737  407330 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:34:35.196838  407330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:34:35.205412  407330 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 10 06:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 10 06:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 06:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 10 06:24 /etc/kubernetes/scheduler.conf
	
	I1210 06:34:35.205484  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:34:35.213947  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:34:35.222529  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.222599  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:34:35.230587  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:34:35.239174  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.239260  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:34:35.247436  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:34:35.255726  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.255785  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:34:35.264394  407330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:34:35.273245  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:35.319550  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.241705  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.453815  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.521107  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.566051  407330 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:34:36.566126  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:37.067292  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... identical pgrep probe repeated every ~500 ms (about 120 attempts, no kube-apiserver process appeared); duplicate lines through 06:35:35.566890 elided ...]
	I1210 06:35:36.066318  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
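The minute of pgrep lines above is minikube waiting for the kube-apiserver process, retried roughly every 500 ms from 06:34:36 to 06:35:36 before it falls through to diagnostics. A self-contained Go sketch of that style of poll loop (runSSH is a hypothetical stand-in for minikube's ssh_runner and executes locally so the example runs as-is):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runSSH stands in for minikube's ssh_runner; here it just runs the
    // command through a local shell.
    func runSSH(cmd string) error {
        return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    // waitForAPIServer polls pgrep until the kube-apiserver process
    // appears or the deadline passes. pgrep exits non-zero when nothing
    // matches, so a nil error doubles as "process found".
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := runSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(60 * time.Second); err != nil {
            fmt.Println(err)
        }
    }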
	I1210 06:35:36.566330  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:36.566414  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:36.592227  407330 cri.go:89] found id: ""
	I1210 06:35:36.592241  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.592248  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:36.592253  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:36.592312  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:36.622028  407330 cri.go:89] found id: ""
	I1210 06:35:36.622043  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.622051  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:36.622056  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:36.622116  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:36.648208  407330 cri.go:89] found id: ""
	I1210 06:35:36.648226  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.648234  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:36.648240  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:36.648298  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:36.674377  407330 cri.go:89] found id: ""
	I1210 06:35:36.674397  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.674405  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:36.674410  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:36.674471  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:36.699772  407330 cri.go:89] found id: ""
	I1210 06:35:36.699787  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.699794  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:36.699801  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:36.699864  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:36.724815  407330 cri.go:89] found id: ""
	I1210 06:35:36.724830  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.724838  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:36.724843  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:36.724900  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:36.750775  407330 cri.go:89] found id: ""
	I1210 06:35:36.750791  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.750798  407330 logs.go:284] No container was found matching "kindnet"
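With the process wait exhausted, each diagnostic pass sweeps crictl once per expected control-plane component and records that nothing matched. A self-contained sketch of the same sweep (assumes crictl and sudo are available; an illustration, not minikube's cri.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // --quiet prints one container ID per line; empty output
            // means no container matched the name filter.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }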
	I1210 06:35:36.750806  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:36.750820  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:36.820446  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:36.820465  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:36.835955  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:36.835970  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:36.903411  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:36.895055   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.895887   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.897562   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.898101   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.899639   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:36.895055   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.895887   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.897562   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.898101   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.899639   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
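The repeated connection-refused errors above mean nothing is listening on the port the kubeconfig points at (8441), consistent with the empty pgrep and crictl results. A quick independent check of that, as a hypothetical probe separate from minikube:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the local apiserver port from the kubeconfig (8441 here).
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // matches the kubectl errors above
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }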
	I1210 06:35:36.903424  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:36.903435  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:36.979747  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:36.979768  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
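The container-status command uses a shell fallback: run crictl if "which" finds it, otherwise try docker ps -a. The same first-success fallback expressed in Go (the helper name and command list are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runFirstAvailable tries each command in order and returns the first
    // successful output, mirroring `crictl ps -a || docker ps -a`.
    func runFirstAvailable(cmds [][]string) (string, error) {
        var lastErr error
        for _, c := range cmds {
            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
            if err == nil {
                return string(out), nil
            }
            lastErr = err
        }
        return "", lastErr
    }

    func main() {
        out, err := runFirstAvailable([][]string{
            {"sudo", "crictl", "ps", "-a"},
            {"sudo", "docker", "ps", "-a"},
        })
        if err != nil {
            fmt.Println("neither crictl nor docker worked:", err)
            return
        }
        fmt.Print(out)
    }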
	I1210 06:35:39.514581  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:39.524909  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:39.524970  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:39.550102  407330 cri.go:89] found id: ""
	I1210 06:35:39.550116  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.550124  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:39.550129  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:39.550187  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:39.576588  407330 cri.go:89] found id: ""
	I1210 06:35:39.576602  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.576619  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:39.576624  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:39.576690  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:39.603288  407330 cri.go:89] found id: ""
	I1210 06:35:39.603303  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.603310  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:39.603315  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:39.603373  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:39.632338  407330 cri.go:89] found id: ""
	I1210 06:35:39.632353  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.632360  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:39.632365  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:39.632420  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:39.657752  407330 cri.go:89] found id: ""
	I1210 06:35:39.657767  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.657773  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:39.657779  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:39.657844  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:39.683212  407330 cri.go:89] found id: ""
	I1210 06:35:39.683226  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.683234  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:39.683240  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:39.683300  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:39.708413  407330 cri.go:89] found id: ""
	I1210 06:35:39.708437  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.708445  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:39.708453  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:39.708464  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:39.775637  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:39.775659  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:39.791086  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:39.791102  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:39.857652  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:39.849379   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.849925   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.851713   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.852318   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.853965   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:39.849379   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.849925   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.851713   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.852318   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.853965   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:39.857663  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:39.857675  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:39.935547  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:39.935569  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:42.469375  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:42.480182  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:42.480240  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:42.506760  407330 cri.go:89] found id: ""
	I1210 06:35:42.506774  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.506781  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:42.506786  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:42.506843  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:42.536234  407330 cri.go:89] found id: ""
	I1210 06:35:42.536249  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.536256  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:42.536261  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:42.536329  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:42.566988  407330 cri.go:89] found id: ""
	I1210 06:35:42.567003  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.567010  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:42.567015  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:42.567076  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:42.592607  407330 cri.go:89] found id: ""
	I1210 06:35:42.592630  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.592638  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:42.592643  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:42.592709  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:42.617649  407330 cri.go:89] found id: ""
	I1210 06:35:42.617664  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.617671  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:42.617676  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:42.617734  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:42.643410  407330 cri.go:89] found id: ""
	I1210 06:35:42.643425  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.643432  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:42.643437  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:42.643503  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:42.669531  407330 cri.go:89] found id: ""
	I1210 06:35:42.669546  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.669553  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:42.669561  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:42.669571  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:42.735924  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:42.735944  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:42.751205  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:42.751229  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:42.816158  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:42.807932   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.808831   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810433   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810980   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.812516   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:42.807932   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.808831   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810433   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810980   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.812516   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:42.816169  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:42.816179  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:42.893021  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:42.893042  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:45.426224  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:45.438079  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:45.438148  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:45.472267  407330 cri.go:89] found id: ""
	I1210 06:35:45.472291  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.472299  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:45.472306  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:45.472384  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:45.502901  407330 cri.go:89] found id: ""
	I1210 06:35:45.502931  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.502939  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:45.502945  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:45.503008  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:45.529442  407330 cri.go:89] found id: ""
	I1210 06:35:45.529458  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.529465  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:45.529470  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:45.529534  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:45.555125  407330 cri.go:89] found id: ""
	I1210 06:35:45.555139  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.555159  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:45.555165  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:45.555243  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:45.580961  407330 cri.go:89] found id: ""
	I1210 06:35:45.580976  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.580994  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:45.580999  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:45.581057  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:45.610965  407330 cri.go:89] found id: ""
	I1210 06:35:45.610980  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.610987  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:45.610993  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:45.611059  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:45.637091  407330 cri.go:89] found id: ""
	I1210 06:35:45.637105  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.637120  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:45.637128  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:45.637137  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:45.715413  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:45.715435  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:45.749154  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:45.749171  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:45.815517  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:45.815543  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:45.831429  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:45.831446  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:45.906374  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:45.898629   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.899395   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.900929   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.901506   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.902971   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:45.898629   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.899395   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.900929   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.901506   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.902971   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:48.406578  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:48.421255  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:48.421324  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:48.447131  407330 cri.go:89] found id: ""
	I1210 06:35:48.447146  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.447153  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:48.447159  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:48.447220  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:48.473099  407330 cri.go:89] found id: ""
	I1210 06:35:48.473122  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.473129  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:48.473134  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:48.473222  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:48.498597  407330 cri.go:89] found id: ""
	I1210 06:35:48.498612  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.498619  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:48.498624  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:48.498681  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:48.523362  407330 cri.go:89] found id: ""
	I1210 06:35:48.523377  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.523384  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:48.523389  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:48.523453  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:48.551807  407330 cri.go:89] found id: ""
	I1210 06:35:48.551821  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.551835  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:48.551840  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:48.551900  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:48.581473  407330 cri.go:89] found id: ""
	I1210 06:35:48.581487  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.581502  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:48.581509  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:48.581565  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:48.607499  407330 cri.go:89] found id: ""
	I1210 06:35:48.607514  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.607521  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:48.607529  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:48.607539  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:48.673753  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:48.673774  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:48.688837  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:48.688853  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:48.751707  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:48.744029   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.744581   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746111   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746590   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.748085   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:48.744029   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.744581   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746111   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746590   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.748085   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:48.751717  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:48.751727  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:48.828663  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:48.828686  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:51.363003  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:51.376217  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:51.376312  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:51.407718  407330 cri.go:89] found id: ""
	I1210 06:35:51.407732  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.407755  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:51.407762  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:51.407874  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:51.444235  407330 cri.go:89] found id: ""
	I1210 06:35:51.444269  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.444286  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:51.444295  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:51.444379  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:51.474869  407330 cri.go:89] found id: ""
	I1210 06:35:51.474883  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.474890  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:51.474895  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:51.474953  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:51.504739  407330 cri.go:89] found id: ""
	I1210 06:35:51.504764  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.504772  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:51.504777  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:51.504846  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:51.532353  407330 cri.go:89] found id: ""
	I1210 06:35:51.532368  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.532375  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:51.532380  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:51.532455  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:51.557565  407330 cri.go:89] found id: ""
	I1210 06:35:51.557579  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.557586  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:51.557591  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:51.557661  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:51.583285  407330 cri.go:89] found id: ""
	I1210 06:35:51.583300  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.583307  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:51.583315  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:51.583325  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:51.613387  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:51.613404  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:51.680028  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:51.680049  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:51.695935  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:51.695952  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:51.759280  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:51.750909   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.751756   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753321   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753933   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.755583   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:51.750909   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.751756   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753321   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753933   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.755583   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:51.759290  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:51.759301  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:54.338519  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:54.348725  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:54.348780  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:54.383598  407330 cri.go:89] found id: ""
	I1210 06:35:54.383626  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.383634  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:54.383639  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:54.383707  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:54.410152  407330 cri.go:89] found id: ""
	I1210 06:35:54.410180  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.410187  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:54.410192  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:54.410264  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:54.438326  407330 cri.go:89] found id: ""
	I1210 06:35:54.438352  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.438360  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:54.438365  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:54.438441  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:54.465850  407330 cri.go:89] found id: ""
	I1210 06:35:54.465864  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.465871  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:54.465876  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:54.465931  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:54.491709  407330 cri.go:89] found id: ""
	I1210 06:35:54.491722  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.491729  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:54.491734  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:54.491790  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:54.523425  407330 cri.go:89] found id: ""
	I1210 06:35:54.523440  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.523447  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:54.523452  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:54.523548  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:54.550380  407330 cri.go:89] found id: ""
	I1210 06:35:54.550394  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.550411  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:54.550438  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:54.550449  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:54.582306  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:54.582324  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:54.647908  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:54.647927  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:54.663750  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:54.663772  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:54.730309  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:54.719958   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.720734   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.722315   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.724777   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.725551   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:54.719958   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.720734   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.722315   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.724777   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.725551   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:54.730320  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:54.730331  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
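
	Note: the block above is one iteration of minikube's apiserver health-check loop: it probes for a kube-apiserver process with pgrep, asks the CRI (via crictl) for each control-plane container, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A minimal shell sketch of the probe sequence, using the exact commands from the Run lines above (the loop and the component list are an illustration, not minikube's actual Go implementation):

	    # probe for a running apiserver process, then ask the CRI for each component
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      sudo crictl ps -a --quiet --name="$c"   # empty output => no container found
	    done

	Every probe in this excerpt returns an empty id list (found id: ""), so the loop keeps retrying.
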
	I1210 06:35:57.308665  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:57.320319  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:57.320392  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:57.345562  407330 cri.go:89] found id: ""
	I1210 06:35:57.345577  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.345584  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:57.345589  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:57.345647  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:57.371859  407330 cri.go:89] found id: ""
	I1210 06:35:57.371874  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.371897  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:57.371903  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:57.371970  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:57.406362  407330 cri.go:89] found id: ""
	I1210 06:35:57.406377  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.406384  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:57.406389  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:57.406463  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:57.436087  407330 cri.go:89] found id: ""
	I1210 06:35:57.436103  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.436110  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:57.436116  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:57.436187  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:57.465764  407330 cri.go:89] found id: ""
	I1210 06:35:57.465779  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.465786  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:57.465791  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:57.465867  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:57.494039  407330 cri.go:89] found id: ""
	I1210 06:35:57.494065  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.494073  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:57.494078  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:57.494145  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:57.520097  407330 cri.go:89] found id: ""
	I1210 06:35:57.520123  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.520131  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:57.520140  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:57.520151  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:57.586496  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:57.586517  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:57.602111  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:57.602128  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:57.668344  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:57.660356   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.660757   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.662627   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.663077   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.664745   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:57.660356   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.660757   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.662627   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.663077   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.664745   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:57.668356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:57.668367  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:57.746160  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:57.746183  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:00.275712  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:00.321874  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:00.321955  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:00.384327  407330 cri.go:89] found id: ""
	I1210 06:36:00.384343  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.384351  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:00.384357  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:00.384451  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:00.459817  407330 cri.go:89] found id: ""
	I1210 06:36:00.459834  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.459842  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:00.459848  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:00.459916  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:00.497674  407330 cri.go:89] found id: ""
	I1210 06:36:00.497690  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.497698  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:00.497704  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:00.497774  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:00.541499  407330 cri.go:89] found id: ""
	I1210 06:36:00.541516  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.541525  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:00.541531  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:00.541613  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:00.581412  407330 cri.go:89] found id: ""
	I1210 06:36:00.581436  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.581463  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:00.581468  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:00.581541  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:00.610779  407330 cri.go:89] found id: ""
	I1210 06:36:00.610795  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.610802  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:00.610807  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:00.610870  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:00.642543  407330 cri.go:89] found id: ""
	I1210 06:36:00.642559  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.642567  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:00.642575  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:00.642586  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:00.710346  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:00.710367  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:00.725875  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:00.725894  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:00.793058  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:00.784782   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.785552   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787181   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787496   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.789654   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:00.784782   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.785552   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787181   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787496   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.789654   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:00.793071  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:00.793084  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:00.875916  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:00.875944  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
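
	Note: the "describe nodes" gathering step runs the version-matched kubectl binary that minikube ships inside the node (v1.35.0-rc.1 here) against the in-node kubeconfig. A hedged way to reproduce the same check by hand (the profile name is written as <profile> because it is not shown in this excerpt):

	    minikube ssh -p <profile> -- sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
	      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	With the apiserver down it fails exactly as logged: every request to https://localhost:8441 is refused.
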
	I1210 06:36:03.406417  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:03.419044  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:03.419120  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:03.447628  407330 cri.go:89] found id: ""
	I1210 06:36:03.447658  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.447666  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:03.447671  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:03.447737  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:03.474253  407330 cri.go:89] found id: ""
	I1210 06:36:03.474266  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.474274  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:03.474279  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:03.474336  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:03.500678  407330 cri.go:89] found id: ""
	I1210 06:36:03.500694  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.500701  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:03.500707  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:03.500768  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:03.528282  407330 cri.go:89] found id: ""
	I1210 06:36:03.528298  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.528306  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:03.528311  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:03.528373  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:03.556656  407330 cri.go:89] found id: ""
	I1210 06:36:03.556670  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.556678  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:03.556683  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:03.556743  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:03.583735  407330 cri.go:89] found id: ""
	I1210 06:36:03.583750  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.583758  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:03.583763  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:03.583819  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:03.609076  407330 cri.go:89] found id: ""
	I1210 06:36:03.609090  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.609097  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:03.609105  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:03.609115  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:03.686817  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:03.686837  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:03.716372  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:03.716389  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:03.784121  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:03.784140  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:03.799951  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:03.799970  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:03.868350  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:03.860456   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.860967   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.862601   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.863024   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.864532   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:03.860456   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.860967   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.862601   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.863024   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.864532   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
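
	Note: in the stderr above, kubectl resolves localhost to the IPv6 loopback and the dial to [::1]:8441 is refused, which means nothing is listening on the apiserver port at all (8441 being the port in the kubeconfig used above). Two quick checks from inside the node would confirm this (a sketch; the availability of ss and curl in the node image is assumed):

	    sudo ss -ltnp | grep 8441 || echo 'nothing listening on 8441'
	    curl -sk https://localhost:8441/healthz || echo 'apiserver unreachable'

	Combined with the empty crictl listings, this points at the kube-apiserver container never being created, rather than a networking or firewall problem.
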
	I1210 06:36:06.369008  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:06.379783  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:06.379844  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:06.413424  407330 cri.go:89] found id: ""
	I1210 06:36:06.413438  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.413452  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:06.413457  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:06.413518  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:06.455432  407330 cri.go:89] found id: ""
	I1210 06:36:06.455446  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.455453  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:06.455458  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:06.455518  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:06.484987  407330 cri.go:89] found id: ""
	I1210 06:36:06.485002  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.485011  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:06.485016  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:06.485079  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:06.510864  407330 cri.go:89] found id: ""
	I1210 06:36:06.510879  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.510887  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:06.510892  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:06.510955  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:06.536841  407330 cri.go:89] found id: ""
	I1210 06:36:06.536856  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.536863  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:06.536868  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:06.536928  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:06.563896  407330 cri.go:89] found id: ""
	I1210 06:36:06.563911  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.563918  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:06.563923  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:06.563982  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:06.588959  407330 cri.go:89] found id: ""
	I1210 06:36:06.588973  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.588981  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:06.588988  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:06.588998  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:06.665721  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:06.665743  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:06.694509  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:06.694527  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:06.761392  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:06.761412  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:06.776431  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:06.776448  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:06.839723  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:06.830973   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.831471   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.833376   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.834107   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.835971   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:06.830973   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.831471   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.833376   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.834107   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.835971   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:09.340200  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:09.350423  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:09.350492  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:09.377180  407330 cri.go:89] found id: ""
	I1210 06:36:09.377216  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.377224  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:09.377229  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:09.377296  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:09.408780  407330 cri.go:89] found id: ""
	I1210 06:36:09.408794  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.408810  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:09.408817  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:09.408891  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:09.439014  407330 cri.go:89] found id: ""
	I1210 06:36:09.439028  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.439046  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:09.439051  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:09.439123  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:09.465550  407330 cri.go:89] found id: ""
	I1210 06:36:09.465570  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.465577  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:09.465582  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:09.465640  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:09.495077  407330 cri.go:89] found id: ""
	I1210 06:36:09.495092  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.495099  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:09.495104  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:09.495160  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:09.524259  407330 cri.go:89] found id: ""
	I1210 06:36:09.524283  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.524291  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:09.524296  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:09.524365  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:09.552397  407330 cri.go:89] found id: ""
	I1210 06:36:09.552411  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.552428  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:09.552435  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:09.552445  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:09.617989  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:09.618009  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:09.633375  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:09.633391  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:09.703345  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:09.695844   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.696518   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.697933   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.698390   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.699860   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:09.695844   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.696518   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.697933   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.698390   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.699860   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:09.703356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:09.703368  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:09.780941  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:09.780963  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
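
	Note: the "container status" step uses a shell fallback so it works whether or not crictl is on PATH and whether the runtime is CRI-O or Docker:

	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	If `which crictl` finds the binary, its full path is substituted into the command; otherwise the bare name crictl is tried, and if that whole invocation fails the Docker CLI is used instead.
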
	I1210 06:36:12.311981  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:12.322588  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:12.322649  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:12.348408  407330 cri.go:89] found id: ""
	I1210 06:36:12.348423  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.348430  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:12.348436  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:12.348494  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:12.381450  407330 cri.go:89] found id: ""
	I1210 06:36:12.381465  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.381492  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:12.381497  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:12.381565  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:12.421286  407330 cri.go:89] found id: ""
	I1210 06:36:12.421301  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.421309  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:12.421314  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:12.421381  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:12.453573  407330 cri.go:89] found id: ""
	I1210 06:36:12.453598  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.453605  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:12.453611  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:12.453677  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:12.480195  407330 cri.go:89] found id: ""
	I1210 06:36:12.480210  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.480218  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:12.480225  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:12.480290  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:12.505648  407330 cri.go:89] found id: ""
	I1210 06:36:12.505662  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.505669  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:12.505674  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:12.505732  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:12.532083  407330 cri.go:89] found id: ""
	I1210 06:36:12.532097  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.532104  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:12.532112  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:12.532125  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:12.598623  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:12.598646  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:12.614317  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:12.614336  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:12.686805  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:12.678148   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.678809   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.680931   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.681479   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.683127   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:12.678148   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.678809   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.680931   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.681479   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.683127   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:12.686817  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:12.686828  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:12.768698  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:12.768719  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:15.302091  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:15.312582  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:15.312644  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:15.338874  407330 cri.go:89] found id: ""
	I1210 06:36:15.338889  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.338897  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:15.338902  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:15.338962  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:15.365600  407330 cri.go:89] found id: ""
	I1210 06:36:15.365614  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.365621  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:15.365627  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:15.365687  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:15.405324  407330 cri.go:89] found id: ""
	I1210 06:36:15.405339  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.405346  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:15.405352  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:15.405411  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:15.438276  407330 cri.go:89] found id: ""
	I1210 06:36:15.438290  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.438298  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:15.438304  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:15.438362  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:15.465120  407330 cri.go:89] found id: ""
	I1210 06:36:15.465135  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.465142  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:15.465147  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:15.465226  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:15.490880  407330 cri.go:89] found id: ""
	I1210 06:36:15.490894  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.490901  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:15.490906  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:15.490968  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:15.517171  407330 cri.go:89] found id: ""
	I1210 06:36:15.517208  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.517215  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:15.517224  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:15.517235  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:15.580940  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:15.573253   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.573879   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575387   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575909   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.577385   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:15.573253   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.573879   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575387   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575909   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.577385   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:15.580950  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:15.580962  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:15.657832  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:15.657853  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:15.690721  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:15.690738  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:15.755970  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:15.755993  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:18.272507  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:18.282762  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:18.282822  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:18.312952  407330 cri.go:89] found id: ""
	I1210 06:36:18.312966  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.312980  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:18.312986  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:18.313048  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:18.340174  407330 cri.go:89] found id: ""
	I1210 06:36:18.340189  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.340196  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:18.340201  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:18.340260  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:18.365096  407330 cri.go:89] found id: ""
	I1210 06:36:18.365111  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.365118  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:18.365122  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:18.365178  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:18.408189  407330 cri.go:89] found id: ""
	I1210 06:36:18.408203  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.408210  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:18.408215  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:18.408271  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:18.439330  407330 cri.go:89] found id: ""
	I1210 06:36:18.439344  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.439351  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:18.439357  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:18.439413  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:18.471472  407330 cri.go:89] found id: ""
	I1210 06:36:18.471486  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.471493  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:18.471498  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:18.471561  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:18.499541  407330 cri.go:89] found id: ""
	I1210 06:36:18.499555  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.499562  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:18.499569  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:18.499579  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:18.566266  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:18.566288  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:18.581335  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:18.581351  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:18.649633  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:18.642053   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.642777   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644268   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644684   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.646194   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:18.642053   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.642777   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644268   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644684   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.646194   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:18.649644  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:18.649657  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:18.727427  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:18.727447  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
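
	Note: across this excerpt the pgrep probe fires at 06:35:54, :57, 06:36:00, :03, :06, :09, :12, :15, :18 and :21, i.e. a fixed retry interval of roughly 3 s. The shape is a poll-until-deadline loop; a hedged sketch of the equivalent (the interval and timeout values are illustrative, not minikube's actual settings):

	    deadline=$((SECONDS + 240))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo 'timed out waiting for apiserver'; break; }
	      sleep 3
	    done

	The excerpt ends mid-iteration with the apiserver still absent, consistent with the wait eventually timing out.
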
	I1210 06:36:21.256173  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:21.266342  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:21.266401  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:21.291198  407330 cri.go:89] found id: ""
	I1210 06:36:21.291212  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.291219  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:21.291224  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:21.291285  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:21.317809  407330 cri.go:89] found id: ""
	I1210 06:36:21.317823  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.317831  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:21.317836  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:21.317893  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:21.349023  407330 cri.go:89] found id: ""
	I1210 06:36:21.349038  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.349046  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:21.349051  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:21.349112  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:21.377021  407330 cri.go:89] found id: ""
	I1210 06:36:21.377036  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.377043  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:21.377049  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:21.377128  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:21.414828  407330 cri.go:89] found id: ""
	I1210 06:36:21.414843  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.414853  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:21.414858  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:21.414924  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:21.448750  407330 cri.go:89] found id: ""
	I1210 06:36:21.448765  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.448772  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:21.448778  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:21.448836  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:21.475060  407330 cri.go:89] found id: ""
	I1210 06:36:21.475082  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.475089  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:21.475097  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:21.475109  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:21.544320  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:21.544350  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:21.559538  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:21.559554  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:21.623730  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:21.615598   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.616210   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.617863   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.618444   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.620054   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:21.615598   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.616210   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.617863   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.618444   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.620054   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:21.623741  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:21.623754  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:21.703706  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:21.703726  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:24.232360  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:24.242917  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:24.242977  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:24.272666  407330 cri.go:89] found id: ""
	I1210 06:36:24.272681  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.272688  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:24.272693  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:24.272762  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:24.298359  407330 cri.go:89] found id: ""
	I1210 06:36:24.298374  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.298381  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:24.298386  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:24.298448  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:24.324096  407330 cri.go:89] found id: ""
	I1210 06:36:24.324110  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.324117  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:24.324122  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:24.324180  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:24.352195  407330 cri.go:89] found id: ""
	I1210 06:36:24.352210  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.352217  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:24.352223  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:24.352281  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:24.392094  407330 cri.go:89] found id: ""
	I1210 06:36:24.392109  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.392116  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:24.392121  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:24.392180  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:24.433688  407330 cri.go:89] found id: ""
	I1210 06:36:24.433702  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.433716  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:24.433721  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:24.433780  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:24.461088  407330 cri.go:89] found id: ""
	I1210 06:36:24.461103  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.461110  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:24.461118  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:24.461140  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:24.491187  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:24.491203  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:24.557420  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:24.557442  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:24.572719  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:24.572736  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:24.638182  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:24.629865   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.630603   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632247   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632788   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.634516   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:24.629865   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.630603   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632247   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632788   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.634516   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:24.638192  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:24.638204  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:27.215263  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:27.225429  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:27.225490  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:27.250600  407330 cri.go:89] found id: ""
	I1210 06:36:27.250623  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.250630  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:27.250636  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:27.250696  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:27.275244  407330 cri.go:89] found id: ""
	I1210 06:36:27.275258  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.275266  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:27.275271  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:27.275337  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:27.303675  407330 cri.go:89] found id: ""
	I1210 06:36:27.303699  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.303707  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:27.303713  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:27.303779  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:27.329179  407330 cri.go:89] found id: ""
	I1210 06:36:27.329211  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.329219  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:27.329225  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:27.329294  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:27.354254  407330 cri.go:89] found id: ""
	I1210 06:36:27.354269  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.354276  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:27.354282  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:27.354340  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:27.386524  407330 cri.go:89] found id: ""
	I1210 06:36:27.386539  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.386546  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:27.386552  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:27.386608  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:27.419941  407330 cri.go:89] found id: ""
	I1210 06:36:27.419964  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.419972  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:27.419980  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:27.419990  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:27.489413  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:27.489436  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:27.504358  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:27.504375  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:27.572076  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:27.564125   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.564752   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.566500   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.567122   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.568559   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:27.564125   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.564752   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.566500   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.567122   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.568559   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:27.572087  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:27.572097  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:27.652684  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:27.652704  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:30.186931  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:30.198655  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:30.198720  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:30.226217  407330 cri.go:89] found id: ""
	I1210 06:36:30.226239  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.226247  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:30.226252  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:30.226319  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:30.254245  407330 cri.go:89] found id: ""
	I1210 06:36:30.254261  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.254268  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:30.254273  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:30.254331  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:30.282139  407330 cri.go:89] found id: ""
	I1210 06:36:30.282154  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.282162  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:30.282167  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:30.282227  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:30.308968  407330 cri.go:89] found id: ""
	I1210 06:36:30.308992  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.308999  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:30.309005  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:30.309076  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:30.337543  407330 cri.go:89] found id: ""
	I1210 06:36:30.337558  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.337565  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:30.337570  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:30.337630  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:30.366448  407330 cri.go:89] found id: ""
	I1210 06:36:30.366463  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.366477  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:30.366483  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:30.366542  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:30.404619  407330 cri.go:89] found id: ""
	I1210 06:36:30.404641  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.404649  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:30.404656  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:30.404667  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:30.484453  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:30.484481  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:30.499101  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:30.499118  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:30.561567  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:30.553438   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.554141   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.555797   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.556329   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.557890   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:30.553438   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.554141   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.555797   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.556329   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.557890   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:30.561578  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:30.561589  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:30.638801  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:30.638822  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:33.169370  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:33.179597  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:33.179662  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:33.204216  407330 cri.go:89] found id: ""
	I1210 06:36:33.204230  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.204246  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:33.204252  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:33.204309  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:33.229498  407330 cri.go:89] found id: ""
	I1210 06:36:33.229512  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.229519  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:33.229524  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:33.229580  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:33.255490  407330 cri.go:89] found id: ""
	I1210 06:36:33.255505  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.255521  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:33.255527  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:33.255593  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:33.283936  407330 cri.go:89] found id: ""
	I1210 06:36:33.283960  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.283968  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:33.283974  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:33.284052  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:33.308959  407330 cri.go:89] found id: ""
	I1210 06:36:33.308974  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.308984  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:33.308990  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:33.309058  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:33.335830  407330 cri.go:89] found id: ""
	I1210 06:36:33.335853  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.335860  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:33.335866  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:33.335936  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:33.362154  407330 cri.go:89] found id: ""
	I1210 06:36:33.362179  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.362187  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:33.362196  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:33.362208  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:33.410395  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:33.410413  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:33.480770  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:33.480789  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:33.496511  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:33.496527  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:33.563939  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:33.556146   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.556663   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.558166   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.558668   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.560192   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:33.556146   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.556663   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.558166   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.558668   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.560192   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:33.563950  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:33.563961  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:36.141828  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:36.152734  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:36.152795  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:36.178688  407330 cri.go:89] found id: ""
	I1210 06:36:36.178703  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.178710  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:36.178716  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:36.178776  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:36.205685  407330 cri.go:89] found id: ""
	I1210 06:36:36.205700  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.205707  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:36.205712  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:36.205771  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:36.231383  407330 cri.go:89] found id: ""
	I1210 06:36:36.231398  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.231411  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:36.231418  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:36.231480  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:36.257291  407330 cri.go:89] found id: ""
	I1210 06:36:36.257316  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.257324  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:36.257329  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:36.257400  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:36.287683  407330 cri.go:89] found id: ""
	I1210 06:36:36.287697  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.287704  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:36.287709  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:36.287767  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:36.313785  407330 cri.go:89] found id: ""
	I1210 06:36:36.313799  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.313807  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:36.313812  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:36.313871  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:36.339325  407330 cri.go:89] found id: ""
	I1210 06:36:36.339339  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.339347  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:36.339356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:36.339369  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:36.421249  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:36.421268  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:36.458225  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:36.458243  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:36.528365  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:36.528384  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:36.544683  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:36.544705  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:36.611624  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:36.602655   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.603473   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.605101   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.605888   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.607572   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:36.602655   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.603473   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.605101   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.605888   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.607572   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:39.111891  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:39.122952  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:39.123016  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:39.151788  407330 cri.go:89] found id: ""
	I1210 06:36:39.151817  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.151825  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:39.151831  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:39.151902  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:39.176656  407330 cri.go:89] found id: ""
	I1210 06:36:39.176679  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.176686  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:39.176691  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:39.176759  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:39.203206  407330 cri.go:89] found id: ""
	I1210 06:36:39.203220  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.203227  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:39.203233  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:39.203289  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:39.228848  407330 cri.go:89] found id: ""
	I1210 06:36:39.228862  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.228869  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:39.228875  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:39.228933  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:39.258475  407330 cri.go:89] found id: ""
	I1210 06:36:39.258512  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.258519  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:39.258524  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:39.258589  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:39.283240  407330 cri.go:89] found id: ""
	I1210 06:36:39.283254  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.283261  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:39.283268  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:39.283328  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:39.312591  407330 cri.go:89] found id: ""
	I1210 06:36:39.312604  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.312611  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:39.312619  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:39.312629  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:39.380680  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:39.380703  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:39.397793  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:39.397809  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:39.469117  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:39.460579   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.461325   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.463132   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.463721   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.465358   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:39.460579   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.461325   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.463132   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.463721   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.465358   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:39.469128  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:39.469139  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:39.546111  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:39.546131  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:42.076431  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:42.089265  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:42.089335  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:42.121496  407330 cri.go:89] found id: ""
	I1210 06:36:42.121512  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.121520  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:42.121526  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:42.121593  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:42.151688  407330 cri.go:89] found id: ""
	I1210 06:36:42.151704  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.151712  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:42.151717  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:42.151784  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:42.190925  407330 cri.go:89] found id: ""
	I1210 06:36:42.190942  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.190949  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:42.190955  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:42.191063  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:42.225827  407330 cri.go:89] found id: ""
	I1210 06:36:42.225849  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.225857  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:42.225863  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:42.225931  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:42.254453  407330 cri.go:89] found id: ""
	I1210 06:36:42.254467  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.254475  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:42.254480  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:42.254557  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:42.281514  407330 cri.go:89] found id: ""
	I1210 06:36:42.281536  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.281545  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:42.281550  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:42.281615  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:42.309082  407330 cri.go:89] found id: ""
	I1210 06:36:42.309097  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.309105  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:42.309115  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:42.309127  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:42.325376  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:42.325393  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:42.394971  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:42.386397   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.387396   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.389262   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.389603   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.390932   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:42.386397   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.387396   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.389262   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.389603   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.390932   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:42.394982  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:42.394993  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:42.480444  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:42.480463  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:42.513077  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:42.513094  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:45.082079  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:45.095928  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:45.096005  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:45.136147  407330 cri.go:89] found id: ""
	I1210 06:36:45.136165  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.136172  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:45.136178  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:45.136321  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:45.171561  407330 cri.go:89] found id: ""
	I1210 06:36:45.171577  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.171584  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:45.171590  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:45.171667  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:45.214225  407330 cri.go:89] found id: ""
	I1210 06:36:45.214243  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.214277  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:45.214282  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:45.214364  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:45.274027  407330 cri.go:89] found id: ""
	I1210 06:36:45.274044  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.274052  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:45.274058  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:45.274128  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:45.321536  407330 cri.go:89] found id: ""
	I1210 06:36:45.321553  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.321561  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:45.321567  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:45.321719  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:45.355270  407330 cri.go:89] found id: ""
	I1210 06:36:45.355285  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.355303  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:45.355310  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:45.355386  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:45.388777  407330 cri.go:89] found id: ""
	I1210 06:36:45.388801  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.388809  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:45.388817  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:45.388827  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:45.478699  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:45.478723  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:45.507903  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:45.507921  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:45.575844  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:45.575864  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:45.591861  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:45.591885  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:45.656312  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:45.648123   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.648663   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.650406   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.650984   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.652724   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:45.648123   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.648663   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.650406   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.650984   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.652724   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
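The block above is one full diagnostic sweep: for each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), minikube lists matching CRI containers, and an empty result yields the 'No container was found matching "..."' warning. A minimal bash sketch of the same sweep, assuming only that crictl is on the node's PATH (an illustration of the pattern in the log, not minikube's actual cri.go code):

    #!/usr/bin/env bash
    # Sweep the control-plane component names seen in the log above.
    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet)
    for name in "${components[@]}"; do
        # List all containers (any state) whose name matches this component.
        ids=$(sudo crictl ps -a --quiet --name="$name")
        if [ -z "$ids" ]; then
            echo "No container was found matching \"$name\""
        else
            echo "$name: $ids"
        fi
    done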
	I1210 06:36:48.156556  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:48.166976  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:48.167036  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:48.192782  407330 cri.go:89] found id: ""
	I1210 06:36:48.192807  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.192817  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:48.192824  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:48.192889  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:48.218586  407330 cri.go:89] found id: ""
	I1210 06:36:48.218600  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.218607  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:48.218623  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:48.218682  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:48.244757  407330 cri.go:89] found id: ""
	I1210 06:36:48.244771  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.244778  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:48.244783  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:48.244841  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:48.271671  407330 cri.go:89] found id: ""
	I1210 06:36:48.271685  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.271692  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:48.271697  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:48.271756  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:48.298466  407330 cri.go:89] found id: ""
	I1210 06:36:48.298480  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.298487  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:48.298493  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:48.298603  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:48.324794  407330 cri.go:89] found id: ""
	I1210 06:36:48.324808  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.324825  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:48.324830  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:48.324888  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:48.351036  407330 cri.go:89] found id: ""
	I1210 06:36:48.351051  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.351058  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:48.351065  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:48.351076  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:48.384287  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:48.384303  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:48.462134  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:48.462154  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:48.477421  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:48.477439  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:48.544257  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:48.535925   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.536728   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.538380   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.538978   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.540777   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:48.535925   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.536728   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.538380   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.538978   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.540777   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:48.544268  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:48.544279  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
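With no containers to inspect, each cycle falls back to host-level log sources. For reference, the gather commands scattered through the cycle above, with paths and flags copied verbatim from the log lines:

    sudo journalctl -u kubelet -n 400        # kubelet logs (last 400 lines)
    sudo journalctl -u crio -n 400           # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings/errors
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a            # container status, docker fallback
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig                             # the step that fails

The describe-nodes step is the only one that fails, because it is the only command that needs the apiserver listening on localhost:8441.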
	I1210 06:36:51.122102  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:51.133691  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:51.133753  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:51.161091  407330 cri.go:89] found id: ""
	I1210 06:36:51.161106  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.161113  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:51.161119  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:51.161217  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:51.189850  407330 cri.go:89] found id: ""
	I1210 06:36:51.189865  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.189872  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:51.189877  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:51.189944  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:51.215676  407330 cri.go:89] found id: ""
	I1210 06:36:51.215691  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.215698  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:51.215703  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:51.215763  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:51.241638  407330 cri.go:89] found id: ""
	I1210 06:36:51.241653  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.241660  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:51.241666  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:51.241728  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:51.266737  407330 cri.go:89] found id: ""
	I1210 06:36:51.266752  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.266759  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:51.266764  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:51.266823  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:51.291896  407330 cri.go:89] found id: ""
	I1210 06:36:51.291911  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.291918  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:51.291923  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:51.291982  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:51.317807  407330 cri.go:89] found id: ""
	I1210 06:36:51.317823  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.317830  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:51.317838  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:51.317849  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:51.385260  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:51.385280  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:51.400443  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:51.400459  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:51.479768  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:51.472416   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.472835   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474317   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474686   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.476122   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:51.472416   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.472835   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474317   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474686   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.476122   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:51.479778  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:51.479789  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:51.556275  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:51.556295  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:54.087759  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:54.098770  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:54.098837  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:54.124003  407330 cri.go:89] found id: ""
	I1210 06:36:54.124017  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.124025  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:54.124030  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:54.124091  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:54.150185  407330 cri.go:89] found id: ""
	I1210 06:36:54.150200  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.150207  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:54.150213  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:54.150272  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:54.177121  407330 cri.go:89] found id: ""
	I1210 06:36:54.177135  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.177143  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:54.177148  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:54.177248  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:54.202926  407330 cri.go:89] found id: ""
	I1210 06:36:54.202941  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.202948  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:54.202953  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:54.203013  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:54.232186  407330 cri.go:89] found id: ""
	I1210 06:36:54.232200  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.232215  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:54.232221  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:54.232291  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:54.257570  407330 cri.go:89] found id: ""
	I1210 06:36:54.257584  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.257592  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:54.257597  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:54.257656  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:54.282060  407330 cri.go:89] found id: ""
	I1210 06:36:54.282074  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.282081  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:54.282088  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:54.282099  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:54.347704  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:54.347728  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:54.362634  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:54.362652  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:54.450702  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:54.442872   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.443270   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.444790   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.445114   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.446658   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:54.442872   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.443270   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.444790   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.445114   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.446658   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:54.450713  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:54.450723  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:54.528465  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:54.528487  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:57.060906  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:57.071228  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:57.071304  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:57.096846  407330 cri.go:89] found id: ""
	I1210 06:36:57.096859  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.096867  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:57.096872  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:57.096932  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:57.122828  407330 cri.go:89] found id: ""
	I1210 06:36:57.122845  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.122852  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:57.122858  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:57.122918  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:57.154708  407330 cri.go:89] found id: ""
	I1210 06:36:57.154723  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.154730  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:57.154736  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:57.154798  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:57.181521  407330 cri.go:89] found id: ""
	I1210 06:36:57.181543  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.181550  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:57.181556  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:57.181620  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:57.206722  407330 cri.go:89] found id: ""
	I1210 06:36:57.206736  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.206743  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:57.206749  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:57.206811  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:57.232129  407330 cri.go:89] found id: ""
	I1210 06:36:57.232143  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.232150  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:57.232155  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:57.232212  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:57.258044  407330 cri.go:89] found id: ""
	I1210 06:36:57.258057  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.258064  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:57.258071  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:57.258081  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:57.285624  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:57.285640  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:57.351757  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:57.351778  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:57.367138  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:57.367157  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:57.458560  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:57.450875   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.451498   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453015   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453535   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.454978   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:57.450875   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.451498   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453015   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453535   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.454978   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:57.458571  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:57.458582  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:00.035650  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:00.112450  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:00.112528  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:00.233350  407330 cri.go:89] found id: ""
	I1210 06:37:00.233368  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.233377  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:00.233383  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:00.233454  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:00.328120  407330 cri.go:89] found id: ""
	I1210 06:37:00.328136  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.328144  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:00.328150  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:00.328216  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:00.369964  407330 cri.go:89] found id: ""
	I1210 06:37:00.369981  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.369989  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:00.369995  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:00.370065  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:00.412610  407330 cri.go:89] found id: ""
	I1210 06:37:00.412628  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.412636  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:00.412642  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:00.412717  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:00.458193  407330 cri.go:89] found id: ""
	I1210 06:37:00.458212  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.458220  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:00.458225  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:00.458300  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:00.486825  407330 cri.go:89] found id: ""
	I1210 06:37:00.486840  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.486848  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:00.486853  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:00.486912  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:00.514588  407330 cri.go:89] found id: ""
	I1210 06:37:00.514604  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.514612  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:00.514631  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:00.514643  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:00.544788  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:00.544807  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:00.611036  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:00.611058  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:00.625887  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:00.625904  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:00.692620  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:00.684863   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.685709   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687299   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687611   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.689091   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:00.684863   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.685709   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687299   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687611   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.689091   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:00.692631  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:00.692642  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:03.270067  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:03.280541  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:03.280604  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:03.306695  407330 cri.go:89] found id: ""
	I1210 06:37:03.306710  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.306718  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:03.306724  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:03.306788  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:03.335215  407330 cri.go:89] found id: ""
	I1210 06:37:03.335230  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.335237  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:03.335243  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:03.335302  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:03.366128  407330 cri.go:89] found id: ""
	I1210 06:37:03.366143  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.366150  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:03.366155  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:03.366214  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:03.407867  407330 cri.go:89] found id: ""
	I1210 06:37:03.407883  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.407891  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:03.407896  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:03.407957  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:03.439688  407330 cri.go:89] found id: ""
	I1210 06:37:03.439703  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.439710  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:03.439716  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:03.439776  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:03.470617  407330 cri.go:89] found id: ""
	I1210 06:37:03.470633  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.470640  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:03.470645  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:03.470708  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:03.495476  407330 cri.go:89] found id: ""
	I1210 06:37:03.495491  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.495498  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:03.495506  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:03.495516  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:03.562017  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:03.562037  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:03.577764  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:03.577782  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:03.644175  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:03.636471   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.637208   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.638686   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.639152   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.640632   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:03.636471   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.637208   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.638686   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.639152   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.640632   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:03.644187  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:03.644198  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:03.721903  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:03.721925  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:06.250929  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:06.261704  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:06.261767  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:06.290140  407330 cri.go:89] found id: ""
	I1210 06:37:06.290155  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.290163  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:06.290168  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:06.290226  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:06.315796  407330 cri.go:89] found id: ""
	I1210 06:37:06.315811  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.315819  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:06.315826  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:06.315884  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:06.340906  407330 cri.go:89] found id: ""
	I1210 06:37:06.340920  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.340927  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:06.340932  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:06.340996  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:06.367812  407330 cri.go:89] found id: ""
	I1210 06:37:06.367827  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.367835  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:06.367840  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:06.367899  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:06.401044  407330 cri.go:89] found id: ""
	I1210 06:37:06.401058  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.401065  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:06.401070  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:06.401166  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:06.438778  407330 cri.go:89] found id: ""
	I1210 06:37:06.438799  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.438806  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:06.438811  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:06.438892  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:06.466678  407330 cri.go:89] found id: ""
	I1210 06:37:06.466692  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.466700  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:06.466708  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:06.466718  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:06.544177  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:06.544200  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:06.573010  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:06.573027  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:06.640533  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:06.640553  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:06.656110  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:06.656128  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:06.723670  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:06.715242   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.715893   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.717714   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.718356   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.720062   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:06.715242   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.715893   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.717714   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.718356   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.720062   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
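The roughly 3-second spacing of the sweep timestamps (06:36:45, :48, :51, ...) reflects a wait loop that polls for a running apiserver process between sweeps, visible as the recurring `pgrep -xnf kube-apiserver.*minikube.*` lines. A hedged sketch of such a loop; the timeout value below is an assumption, since the log does not show where the retries stop:

    #!/usr/bin/env bash
    # Poll for a kube-apiserver process, as the log's pgrep lines do.
    deadline=$((SECONDS + 240))    # assumed timeout; not taken from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "timed out waiting for kube-apiserver" >&2
            exit 1
        fi
        sleep 3                    # matches the ~3s gap between sweeps
    done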
	I1210 06:37:09.224405  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:09.234680  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:09.234741  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:09.260264  407330 cri.go:89] found id: ""
	I1210 06:37:09.260278  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.260285  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:09.260290  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:09.260348  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:09.285806  407330 cri.go:89] found id: ""
	I1210 06:37:09.285823  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.285830  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:09.285836  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:09.285899  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:09.315817  407330 cri.go:89] found id: ""
	I1210 06:37:09.315832  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.315840  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:09.315845  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:09.315901  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:09.346059  407330 cri.go:89] found id: ""
	I1210 06:37:09.346074  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.346081  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:09.346087  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:09.346144  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:09.381275  407330 cri.go:89] found id: ""
	I1210 06:37:09.381290  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.381297  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:09.381303  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:09.381366  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:09.414891  407330 cri.go:89] found id: ""
	I1210 06:37:09.414905  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.414912  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:09.414918  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:09.414979  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:09.443742  407330 cri.go:89] found id: ""
	I1210 06:37:09.443757  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.443763  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:09.443771  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:09.443781  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:09.510740  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:09.510762  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:09.526338  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:09.526355  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:09.590739  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:09.582122   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.582824   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.584640   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.585274   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.587041   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:09.582122   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.582824   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.584640   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.585274   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.587041   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:09.590750  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:09.590762  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:09.668271  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:09.668292  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
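The block above is one full iteration of minikube's apiserver wait loop: it probes for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), every probe returns an empty ID list, the diagnostic bundle (kubelet, dmesg, describe nodes, CRI-O, container status) is gathered, and the loop retries roughly every three seconds, as the timestamps below show. A minimal sketch of the same container probe run by hand against the node — the profile name is a placeholder, not taken from this log:

	$ minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name=kube-apiserver
	# Empty output corresponds to the `found id: ""` lines above: no
	# kube-apiserver container has ever been created on this node.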
	I1210 06:37:12.200039  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:12.210520  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:12.210590  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:12.237060  407330 cri.go:89] found id: ""
	I1210 06:37:12.237075  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.237083  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:12.237088  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:12.237160  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:12.263263  407330 cri.go:89] found id: ""
	I1210 06:37:12.263277  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.263284  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:12.263290  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:12.263354  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:12.291756  407330 cri.go:89] found id: ""
	I1210 06:37:12.291772  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.291780  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:12.291785  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:12.291847  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:12.321162  407330 cri.go:89] found id: ""
	I1210 06:37:12.321177  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.321213  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:12.321218  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:12.321279  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:12.347025  407330 cri.go:89] found id: ""
	I1210 06:37:12.347039  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.347054  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:12.347060  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:12.347121  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:12.376035  407330 cri.go:89] found id: ""
	I1210 06:37:12.376050  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.376058  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:12.376064  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:12.376126  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:12.410703  407330 cri.go:89] found id: ""
	I1210 06:37:12.410717  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.410724  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:12.410733  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:12.410744  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:12.486662  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:12.486686  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:12.502236  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:12.502255  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:12.568662  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:12.560235   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.561423   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.562538   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.563321   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.564916   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:12.560235   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.561423   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.562538   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.563321   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.564916   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:12.568672  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:12.568683  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:12.645878  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:12.645901  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
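Each "describe nodes" attempt fails the same way: kubectl cannot reach the apiserver at localhost:8441, so every API group fetch dies with `connect: connection refused` before a single node can be described. Assuming curl is available inside the node image (an assumption; it is not shown in this log), the port can be probed directly, and while no apiserver container exists it should fail to connect rather than return a health status:

	$ minikube ssh -p <profile> -- curl -sk https://localhost:8441/healthz
	# Expected to fail with a connection error here, since nothing is
	# listening on 8441 until a kube-apiserver container comes up.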
	I1210 06:37:15.177927  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:15.191193  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:15.191288  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:15.219881  407330 cri.go:89] found id: ""
	I1210 06:37:15.219896  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.219904  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:15.219911  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:15.219971  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:15.247528  407330 cri.go:89] found id: ""
	I1210 06:37:15.247544  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.247551  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:15.247557  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:15.247620  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:15.274888  407330 cri.go:89] found id: ""
	I1210 06:37:15.274903  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.274911  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:15.274920  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:15.274979  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:15.300280  407330 cri.go:89] found id: ""
	I1210 06:37:15.300295  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.300302  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:15.300308  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:15.300369  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:15.325424  407330 cri.go:89] found id: ""
	I1210 06:37:15.325438  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.325445  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:15.325450  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:15.325512  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:15.359467  407330 cri.go:89] found id: ""
	I1210 06:37:15.359482  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.359490  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:15.359495  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:15.359551  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:15.399967  407330 cri.go:89] found id: ""
	I1210 06:37:15.399982  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.399990  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:15.399998  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:15.400019  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:15.477621  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:15.477643  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:15.493123  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:15.493140  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:15.564193  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:15.554932   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.555848   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.557673   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.558388   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.560046   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:15.554932   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.555848   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.557673   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.558388   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.560046   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:15.564206  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:15.564216  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:15.640233  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:15.640254  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:18.174394  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:18.186025  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:18.186097  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:18.215781  407330 cri.go:89] found id: ""
	I1210 06:37:18.215795  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.215814  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:18.215819  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:18.215877  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:18.241012  407330 cri.go:89] found id: ""
	I1210 06:37:18.241033  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.241044  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:18.241054  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:18.241155  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:18.270058  407330 cri.go:89] found id: ""
	I1210 06:37:18.270072  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.270079  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:18.270090  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:18.270147  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:18.297554  407330 cri.go:89] found id: ""
	I1210 06:37:18.297576  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.297593  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:18.297603  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:18.297695  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:18.330116  407330 cri.go:89] found id: ""
	I1210 06:37:18.330130  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.330136  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:18.330142  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:18.330217  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:18.360475  407330 cri.go:89] found id: ""
	I1210 06:37:18.360489  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.360496  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:18.360502  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:18.360570  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:18.393014  407330 cri.go:89] found id: ""
	I1210 06:37:18.393028  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.393035  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:18.393043  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:18.393064  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:18.412466  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:18.412484  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:18.485431  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:18.477889   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.478765   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.479651   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.480367   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.481965   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:18.477889   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.478765   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.479651   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.480367   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.481965   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:18.485441  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:18.485452  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:18.561043  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:18.561064  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:18.588628  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:18.588644  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
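Each retry cycle opens with a process-level check before the CRI probes: `pgrep -xnf` looks for a running kube-apiserver process whose full command line mentions the profile. The same check can be replayed by hand (sketch; profile name is a placeholder), and a non-zero exit status with no output is exactly what keeps the loop going:

	$ minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# pgrep exits 1 with no output when nothing matches, mirroring the
	# immediate fall-through to the crictl probes seen in each cycle above.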
	I1210 06:37:21.156119  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:21.166481  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:21.166541  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:21.191589  407330 cri.go:89] found id: ""
	I1210 06:37:21.191604  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.191611  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:21.191625  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:21.191689  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:21.217715  407330 cri.go:89] found id: ""
	I1210 06:37:21.217730  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.217738  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:21.217744  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:21.217811  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:21.246916  407330 cri.go:89] found id: ""
	I1210 06:37:21.246930  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.246945  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:21.246950  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:21.247005  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:21.271644  407330 cri.go:89] found id: ""
	I1210 06:37:21.271659  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.271666  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:21.271672  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:21.271739  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:21.299971  407330 cri.go:89] found id: ""
	I1210 06:37:21.299985  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.299993  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:21.299998  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:21.300057  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:21.325497  407330 cri.go:89] found id: ""
	I1210 06:37:21.325512  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.325519  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:21.325524  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:21.325583  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:21.351049  407330 cri.go:89] found id: ""
	I1210 06:37:21.351064  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.351071  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:21.351079  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:21.351095  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:21.421855  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:21.421874  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:21.437324  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:21.437341  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:21.499548  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:21.490639   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.491333   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.493043   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.493634   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.495290   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:21.490639   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.491333   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.493043   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.493634   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.495290   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:21.499604  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:21.499615  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:21.576803  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:21.576824  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:24.110608  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:24.121006  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:24.121068  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:24.146461  407330 cri.go:89] found id: ""
	I1210 06:37:24.146476  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.146483  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:24.146488  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:24.146601  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:24.172866  407330 cri.go:89] found id: ""
	I1210 06:37:24.172882  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.172889  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:24.172894  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:24.172956  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:24.199448  407330 cri.go:89] found id: ""
	I1210 06:37:24.199463  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.199470  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:24.199475  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:24.199535  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:24.229234  407330 cri.go:89] found id: ""
	I1210 06:37:24.229250  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.229257  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:24.229263  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:24.229323  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:24.254311  407330 cri.go:89] found id: ""
	I1210 06:37:24.254326  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.254334  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:24.254339  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:24.254401  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:24.284029  407330 cri.go:89] found id: ""
	I1210 06:37:24.284044  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.284051  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:24.284056  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:24.284131  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:24.309694  407330 cri.go:89] found id: ""
	I1210 06:37:24.309708  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.309715  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:24.309724  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:24.309735  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:24.372553  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:24.363947   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.364695   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.366686   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.367278   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.368967   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:24.363947   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.364695   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.366686   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.367278   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.368967   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:24.372563  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:24.372575  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:24.464562  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:24.464585  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:24.493762  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:24.493778  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:24.563092  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:24.563113  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:27.078938  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:27.089277  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:27.089338  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:27.114399  407330 cri.go:89] found id: ""
	I1210 06:37:27.114413  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.114421  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:27.114427  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:27.114491  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:27.144680  407330 cri.go:89] found id: ""
	I1210 06:37:27.144695  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.144702  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:27.144707  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:27.144765  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:27.168950  407330 cri.go:89] found id: ""
	I1210 06:37:27.168965  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.168972  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:27.168977  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:27.169034  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:27.196136  407330 cri.go:89] found id: ""
	I1210 06:37:27.196151  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.196159  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:27.196164  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:27.196221  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:27.225403  407330 cri.go:89] found id: ""
	I1210 06:37:27.225418  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.225426  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:27.225432  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:27.225492  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:27.252922  407330 cri.go:89] found id: ""
	I1210 06:37:27.252938  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.252945  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:27.252950  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:27.253009  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:27.278155  407330 cri.go:89] found id: ""
	I1210 06:37:27.278169  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.278177  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:27.278185  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:27.278197  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:27.309557  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:27.309573  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:27.385911  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:27.385939  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:27.404671  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:27.404689  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:27.482019  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:27.473831   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.474734   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.476086   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.476735   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.478362   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:27.473831   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.474734   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.476086   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.476735   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.478362   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:27.482029  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:27.482040  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
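The log bundle gathered on each failed iteration comes from four fixed sources, all visible in the cycles above: the kubelet and CRI-O journald units, the kernel ring buffer, and the runtime's own container list. Replaying the same collection manually is just the logged commands issued over SSH (sketch; profile name is a placeholder, and the `tail -n 400` trim from the original dmesg invocation is dropped for brevity):

	$ minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400
	$ minikube ssh -p <profile> -- sudo journalctl -u crio -n 400
	$ minikube ssh -p <profile> -- sudo dmesg --level warn,err,crit,alert,emerg
	$ minikube ssh -p <profile> -- sudo crictl ps -a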
	I1210 06:37:30.059859  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:30.073120  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:30.073221  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:30.104876  407330 cri.go:89] found id: ""
	I1210 06:37:30.104902  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.104910  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:30.104915  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:30.104992  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:30.133968  407330 cri.go:89] found id: ""
	I1210 06:37:30.133984  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.133999  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:30.134007  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:30.134079  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:30.162870  407330 cri.go:89] found id: ""
	I1210 06:37:30.162888  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.162895  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:30.162901  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:30.162965  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:30.190402  407330 cri.go:89] found id: ""
	I1210 06:37:30.190416  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.190424  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:30.190429  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:30.190488  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:30.219884  407330 cri.go:89] found id: ""
	I1210 06:37:30.219913  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.219920  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:30.219926  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:30.219999  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:30.246737  407330 cri.go:89] found id: ""
	I1210 06:37:30.246752  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.246760  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:30.246765  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:30.246825  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:30.273326  407330 cri.go:89] found id: ""
	I1210 06:37:30.273340  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.273348  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:30.273356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:30.273366  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:30.350646  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:30.350667  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:30.385499  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:30.385515  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:30.461766  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:30.461790  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:30.477421  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:30.477438  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:30.539694  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:30.532297   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.532864   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.534312   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.534817   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.536259   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:30.532297   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.532864   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.534312   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.534817   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.536259   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:33.041379  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:33.052111  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:33.052178  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:33.080472  407330 cri.go:89] found id: ""
	I1210 06:37:33.080487  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.080494  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:33.080499  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:33.080556  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:33.107304  407330 cri.go:89] found id: ""
	I1210 06:37:33.107319  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.107326  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:33.107331  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:33.107389  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:33.133653  407330 cri.go:89] found id: ""
	I1210 06:37:33.133668  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.133675  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:33.133680  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:33.133740  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:33.159244  407330 cri.go:89] found id: ""
	I1210 06:37:33.159259  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.159266  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:33.159272  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:33.159328  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:33.185378  407330 cri.go:89] found id: ""
	I1210 06:37:33.185393  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.185402  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:33.185407  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:33.185466  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:33.210558  407330 cri.go:89] found id: ""
	I1210 06:37:33.210588  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.210609  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:33.210615  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:33.210672  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:33.235742  407330 cri.go:89] found id: ""
	I1210 06:37:33.235756  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.235773  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:33.235782  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:33.235796  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:33.303992  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:33.304010  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:33.321348  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:33.321367  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:33.396780  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:33.385824   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.386759   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.387788   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.388485   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.390532   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:33.385824   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.386759   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.387788   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.388485   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.390532   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:33.396789  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:33.396800  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:33.483704  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:33.483727  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:36.014717  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:36.026269  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:36.026331  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:36.054956  407330 cri.go:89] found id: ""
	I1210 06:37:36.054982  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.054989  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:36.054995  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:36.055055  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:36.081454  407330 cri.go:89] found id: ""
	I1210 06:37:36.081470  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.081477  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:36.081483  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:36.081544  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:36.112094  407330 cri.go:89] found id: ""
	I1210 06:37:36.112108  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.112116  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:36.112121  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:36.112181  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:36.138426  407330 cri.go:89] found id: ""
	I1210 06:37:36.138441  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.138448  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:36.138453  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:36.138512  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:36.164608  407330 cri.go:89] found id: ""
	I1210 06:37:36.164623  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.164630  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:36.164637  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:36.164693  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:36.192038  407330 cri.go:89] found id: ""
	I1210 06:37:36.192052  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.192059  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:36.192064  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:36.192124  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:36.221044  407330 cri.go:89] found id: ""
	I1210 06:37:36.221058  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.221065  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:36.221073  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:36.221085  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:36.250907  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:36.250923  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:36.316733  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:36.316753  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:36.332493  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:36.332509  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:36.412829  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:36.401482   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.404020   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.404535   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.405958   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.407122   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:36.412843  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:36.412857  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:39.007236  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:39.020585  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:39.020658  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:39.046864  407330 cri.go:89] found id: ""
	I1210 06:37:39.046879  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.046886  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:39.046892  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:39.046954  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:39.076119  407330 cri.go:89] found id: ""
	I1210 06:37:39.076143  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.076152  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:39.076157  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:39.076226  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:39.102655  407330 cri.go:89] found id: ""
	I1210 06:37:39.102671  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.102678  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:39.102684  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:39.102746  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:39.128306  407330 cri.go:89] found id: ""
	I1210 06:37:39.128320  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.128327  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:39.128333  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:39.128407  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:39.156045  407330 cri.go:89] found id: ""
	I1210 06:37:39.156069  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.156076  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:39.156087  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:39.156156  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:39.183781  407330 cri.go:89] found id: ""
	I1210 06:37:39.183796  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.183804  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:39.183809  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:39.183867  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:39.209244  407330 cri.go:89] found id: ""
	I1210 06:37:39.209258  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.209266  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:39.209273  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:39.209294  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:39.274373  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:39.274392  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:39.289765  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:39.289782  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:39.353525  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:39.345986   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.346357   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.348003   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.348560   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.350004   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:39.353537  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:39.353548  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:39.432803  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:39.432822  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
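The roughly three-second cycle above is minikube's apiserver wait loop: it probes for a kube-apiserver process with pgrep, asks crictl whether any of the expected control-plane containers exist, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A minimal bash sketch of the same probe, assuming shell access to the node and that crictl is on the PATH (the individual commands are taken verbatim from the log lines above; the loop itself is illustrative, not minikube's actual code):

```bash
#!/usr/bin/env bash
# Illustrative re-creation of the probe sequence seen in the log above.
# Assumes crictl and journalctl are available on the node.

# 1. Is a kube-apiserver process running at all?
sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

# 2. Does the CRI know about any expected control-plane container?
for name in kube-apiserver etcd coredns kube-scheduler \
            kube-proxy kube-controller-manager kindnet; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  [ -z "$ids" ] && echo "no container found matching \"$name\""
done

# 3. Gather the same logs minikube collects on each failed round.
sudo journalctl -u kubelet -n 400 > kubelet.log
sudo journalctl -u crio -n 400 > crio.log
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
```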
	I1210 06:37:41.965778  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:41.979117  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:41.979179  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:42.015640  407330 cri.go:89] found id: ""
	I1210 06:37:42.015658  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.015683  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:42.015689  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:42.015759  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:42.048532  407330 cri.go:89] found id: ""
	I1210 06:37:42.048546  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.048553  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:42.048559  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:42.048618  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:42.076982  407330 cri.go:89] found id: ""
	I1210 06:37:42.076998  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.077006  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:42.077012  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:42.077084  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:42.112254  407330 cri.go:89] found id: ""
	I1210 06:37:42.112295  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.112304  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:42.112312  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:42.112393  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:42.150624  407330 cri.go:89] found id: ""
	I1210 06:37:42.150640  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.150647  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:42.150653  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:42.150718  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:42.180813  407330 cri.go:89] found id: ""
	I1210 06:37:42.180845  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.180854  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:42.180860  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:42.180927  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:42.212103  407330 cri.go:89] found id: ""
	I1210 06:37:42.212120  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.212129  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:42.212139  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:42.212151  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:42.228371  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:42.228388  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:42.298333  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:42.290091   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.290977   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.292784   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.293526   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.294529   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:42.298344  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:42.298363  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:42.375054  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:42.375076  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:42.409015  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:42.409031  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:44.985261  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:44.995937  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:44.995997  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:45.074766  407330 cri.go:89] found id: ""
	I1210 06:37:45.074782  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.074790  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:45.074805  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:45.074874  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:45.130730  407330 cri.go:89] found id: ""
	I1210 06:37:45.130747  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.130755  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:45.130760  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:45.130828  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:45.169030  407330 cri.go:89] found id: ""
	I1210 06:37:45.169058  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.169067  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:45.169073  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:45.169157  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:45.215800  407330 cri.go:89] found id: ""
	I1210 06:37:45.215826  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.215835  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:45.215841  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:45.215915  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:45.274656  407330 cri.go:89] found id: ""
	I1210 06:37:45.274675  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.274684  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:45.274689  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:45.274771  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:45.313260  407330 cri.go:89] found id: ""
	I1210 06:37:45.313277  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.313290  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:45.313296  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:45.313418  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:45.347971  407330 cri.go:89] found id: ""
	I1210 06:37:45.347997  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.348005  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:45.348014  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:45.348028  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:45.381763  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:45.381780  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:45.462459  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:45.462482  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:45.477837  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:45.477854  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:45.547658  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:45.539217   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.540334   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.541688   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.542195   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.543964   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:45.547669  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:45.547680  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
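Every describe-nodes attempt in this span fails identically: kubectl's API-group discovery gets connection refused five times on localhost:8441 (the apiserver port in this profile's kubeconfig, per the log) and gives up, which is consistent with the empty crictl listings above, since no kube-apiserver container ever started. A quick manual check for the same condition, assuming shell access to the node (the port is taken from the log; these diagnostic commands are illustrative, not part of the test):

```bash
# Nothing should be listening if the apiserver never started:
sudo ss -tlnp | grep ':8441' || echo "nothing listening on 8441"

# The probe kubectl performs, minus TLS verification:
curl -sk https://localhost:8441/healthz || echo "connection refused: apiserver is down"
```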
	I1210 06:37:48.124454  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:48.134803  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:48.134866  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:48.162481  407330 cri.go:89] found id: ""
	I1210 06:37:48.162498  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.162507  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:48.162512  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:48.162572  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:48.192262  407330 cri.go:89] found id: ""
	I1210 06:37:48.192276  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.192283  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:48.192289  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:48.192350  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:48.220715  407330 cri.go:89] found id: ""
	I1210 06:37:48.220730  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.220737  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:48.220742  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:48.220802  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:48.244954  407330 cri.go:89] found id: ""
	I1210 06:37:48.244968  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.244976  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:48.244981  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:48.245040  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:48.272316  407330 cri.go:89] found id: ""
	I1210 06:37:48.272330  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.272337  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:48.272343  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:48.272399  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:48.300204  407330 cri.go:89] found id: ""
	I1210 06:37:48.300219  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.300226  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:48.300232  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:48.300293  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:48.329747  407330 cri.go:89] found id: ""
	I1210 06:37:48.329762  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.329769  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:48.329777  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:48.329789  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:48.395638  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:48.395658  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:48.411092  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:48.411108  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:48.478819  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:48.470539   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.471330   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.472882   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.473423   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.475010   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:48.478829  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:48.478841  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:48.556858  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:48.556880  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:51.087332  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:51.097952  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:51.098014  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:51.125310  407330 cri.go:89] found id: ""
	I1210 06:37:51.125325  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.125333  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:51.125345  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:51.125424  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:51.152518  407330 cri.go:89] found id: ""
	I1210 06:37:51.152533  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.152541  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:51.152547  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:51.152619  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:51.181199  407330 cri.go:89] found id: ""
	I1210 06:37:51.181214  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.181222  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:51.181233  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:51.181302  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:51.211368  407330 cri.go:89] found id: ""
	I1210 06:37:51.211382  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.211399  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:51.211405  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:51.211473  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:51.240371  407330 cri.go:89] found id: ""
	I1210 06:37:51.240386  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.240413  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:51.240420  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:51.240493  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:51.266983  407330 cri.go:89] found id: ""
	I1210 06:37:51.266998  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.267005  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:51.267010  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:51.267077  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:51.292392  407330 cri.go:89] found id: ""
	I1210 06:37:51.292417  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.292425  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:51.292433  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:51.292443  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:51.357098  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:51.357119  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:51.372292  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:51.372310  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:51.456874  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:51.448584   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.449513   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.451286   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.451619   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.453250   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:51.456885  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:51.456896  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:51.532131  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:51.532155  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:54.070226  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:54.081032  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:54.081095  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:54.107855  407330 cri.go:89] found id: ""
	I1210 06:37:54.107871  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.107878  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:54.107884  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:54.107954  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:54.133470  407330 cri.go:89] found id: ""
	I1210 06:37:54.133484  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.133491  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:54.133496  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:54.133556  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:54.160836  407330 cri.go:89] found id: ""
	I1210 06:37:54.160851  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.160859  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:54.160864  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:54.160931  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:54.191664  407330 cri.go:89] found id: ""
	I1210 06:37:54.191679  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.191686  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:54.191692  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:54.191758  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:54.216267  407330 cri.go:89] found id: ""
	I1210 06:37:54.216280  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.216298  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:54.216303  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:54.216370  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:54.241369  407330 cri.go:89] found id: ""
	I1210 06:37:54.241383  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.241390  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:54.241395  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:54.241454  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:54.265711  407330 cri.go:89] found id: ""
	I1210 06:37:54.265725  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.265732  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:54.265740  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:54.265750  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:54.280292  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:54.280314  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:54.343110  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:54.335264   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.335904   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337479   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337979   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.339543   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:54.343120  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:54.343131  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:54.421398  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:54.421417  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:54.457832  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:54.457849  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:57.030320  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:57.040862  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:57.040923  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:57.065817  407330 cri.go:89] found id: ""
	I1210 06:37:57.065832  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.065840  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:57.065845  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:57.065908  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:57.091828  407330 cri.go:89] found id: ""
	I1210 06:37:57.091842  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.091849  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:57.091855  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:57.091912  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:57.117033  407330 cri.go:89] found id: ""
	I1210 06:37:57.117047  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.117054  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:57.117060  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:57.117128  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:57.143007  407330 cri.go:89] found id: ""
	I1210 06:37:57.143021  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.143028  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:57.143034  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:57.143090  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:57.171364  407330 cri.go:89] found id: ""
	I1210 06:37:57.171379  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.171386  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:57.171391  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:57.171451  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:57.195695  407330 cri.go:89] found id: ""
	I1210 06:37:57.195723  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.195730  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:57.195736  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:57.195802  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:57.225018  407330 cri.go:89] found id: ""
	I1210 06:37:57.225033  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.225040  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:57.225049  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:57.225060  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:57.299878  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:57.291518   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.292410   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294182   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294649   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.296294   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:57.299889  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:57.299899  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:57.377757  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:57.377778  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:57.420515  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:57.420531  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:57.493246  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:57.493267  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:00.010113  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:00.082560  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:00.082643  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:00.187405  407330 cri.go:89] found id: ""
	I1210 06:38:00.190377  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.190403  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:00.190413  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:00.190506  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:00.256368  407330 cri.go:89] found id: ""
	I1210 06:38:00.256395  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.256405  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:00.256411  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:00.256498  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:00.309570  407330 cri.go:89] found id: ""
	I1210 06:38:00.309587  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.309595  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:00.309602  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:00.309691  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:00.359167  407330 cri.go:89] found id: ""
	I1210 06:38:00.359184  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.359193  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:00.359199  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:00.359284  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:00.401533  407330 cri.go:89] found id: ""
	I1210 06:38:00.401549  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.401557  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:00.401562  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:00.401629  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:00.439769  407330 cri.go:89] found id: ""
	I1210 06:38:00.439784  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.439792  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:00.439797  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:00.439863  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:00.471369  407330 cri.go:89] found id: ""
	I1210 06:38:00.471384  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.471392  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:00.471400  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:00.471412  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:00.504494  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:00.504511  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:00.570722  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:00.570742  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:00.585662  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:00.585679  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:00.648503  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:00.640817   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.641687   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643282   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643584   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.645048   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
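	Every "describe nodes" attempt in this run fails identically: the dial to [::1]:8441 is refused, so nothing is accepting connections on the apiserver port at all, rather than kubectl being misconfigured. Two quick manual checks (hypothetical follow-ups, not commands this harness runs) that separate "no listener" from "listener up but unhealthy":

	    # is any process listening on 8441?
	    sudo ss -tlnp | grep 8441
	    # probe the apiserver health endpoint directly (2s timeout, TLS verification skipped)
	    curl -sk --max-time 2 https://localhost:8441/livez; echo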
	I1210 06:38:00.648513  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:00.648524  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
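	With no Kubernetes containers found, the CRI-O journal collected here is the most likely place to explain why pods never started. A few standard follow-up commands for inspecting the runtime directly (ordinary systemd/crictl tooling, not taken from this log):

	    # is the runtime service running at all?
	    sudo systemctl is-active crio
	    # same 400-line window the harness collects, without a pager
	    sudo journalctl -u crio -n 400 --no-pager
	    # runtime status and configuration as reported over the CRI socket
	    sudo crictl info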
	I1210 06:38:03.225660  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:03.235918  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:03.235979  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:03.260969  407330 cri.go:89] found id: ""
	I1210 06:38:03.260984  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.260991  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:03.260996  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:03.261058  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:03.286700  407330 cri.go:89] found id: ""
	I1210 06:38:03.286714  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.286721  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:03.286726  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:03.286785  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:03.315672  407330 cri.go:89] found id: ""
	I1210 06:38:03.315686  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.315694  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:03.315699  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:03.315757  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:03.344486  407330 cri.go:89] found id: ""
	I1210 06:38:03.344501  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.344508  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:03.344517  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:03.344576  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:03.371038  407330 cri.go:89] found id: ""
	I1210 06:38:03.371052  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.371059  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:03.371064  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:03.371127  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:03.404397  407330 cri.go:89] found id: ""
	I1210 06:38:03.404412  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.404420  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:03.404425  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:03.404492  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:03.440935  407330 cri.go:89] found id: ""
	I1210 06:38:03.440949  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.440957  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:03.440965  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:03.440975  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:03.509589  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:03.509610  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:03.525492  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:03.525509  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:03.592907  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:03.584405   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.585238   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587039   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587639   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.589379   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:03.592926  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:03.592938  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:03.669095  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:03.669114  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:06.198833  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:06.209381  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:06.209457  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:06.234410  407330 cri.go:89] found id: ""
	I1210 06:38:06.234424  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.234431  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:06.234437  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:06.234495  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:06.264001  407330 cri.go:89] found id: ""
	I1210 06:38:06.264016  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.264022  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:06.264028  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:06.264087  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:06.289353  407330 cri.go:89] found id: ""
	I1210 06:38:06.289367  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.289375  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:06.289380  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:06.289442  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:06.318627  407330 cri.go:89] found id: ""
	I1210 06:38:06.318643  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.318651  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:06.318656  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:06.318715  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:06.344169  407330 cri.go:89] found id: ""
	I1210 06:38:06.344183  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.344191  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:06.344196  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:06.344255  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:06.372255  407330 cri.go:89] found id: ""
	I1210 06:38:06.372270  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.372277  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:06.372283  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:06.372346  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:06.410561  407330 cri.go:89] found id: ""
	I1210 06:38:06.410575  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.410582  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:06.410590  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:06.410601  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:06.485685  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:06.485706  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:06.500886  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:06.500904  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:06.569054  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:06.561431   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.562119   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.563630   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.564134   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.565584   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:06.569065  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:06.569078  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:06.650735  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:06.650760  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:09.182920  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:09.193744  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:09.193805  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:09.224238  407330 cri.go:89] found id: ""
	I1210 06:38:09.224253  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.224260  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:09.224265  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:09.224321  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:09.249812  407330 cri.go:89] found id: ""
	I1210 06:38:09.249827  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.249835  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:09.249840  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:09.249900  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:09.275012  407330 cri.go:89] found id: ""
	I1210 06:38:09.275025  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.275032  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:09.275037  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:09.275094  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:09.299472  407330 cri.go:89] found id: ""
	I1210 06:38:09.299500  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.299508  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:09.299513  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:09.299579  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:09.325485  407330 cri.go:89] found id: ""
	I1210 06:38:09.325499  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.325507  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:09.325512  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:09.325567  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:09.350568  407330 cri.go:89] found id: ""
	I1210 06:38:09.350582  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.350589  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:09.350594  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:09.350657  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:09.380510  407330 cri.go:89] found id: ""
	I1210 06:38:09.380524  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.380531  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:09.380548  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:09.380560  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:09.421824  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:09.421840  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:09.497738  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:09.497764  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:09.513692  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:09.513711  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:09.581478  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:09.573930   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.574589   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.576111   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.576487   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.577997   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:09.581497  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:09.581507  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:12.158761  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:12.169119  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:12.169177  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:12.194655  407330 cri.go:89] found id: ""
	I1210 06:38:12.194670  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.194677  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:12.194683  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:12.194739  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:12.223200  407330 cri.go:89] found id: ""
	I1210 06:38:12.223216  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.223223  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:12.223228  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:12.223293  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:12.249017  407330 cri.go:89] found id: ""
	I1210 06:38:12.249032  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.249043  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:12.249049  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:12.249110  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:12.274392  407330 cri.go:89] found id: ""
	I1210 06:38:12.274407  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.274414  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:12.274420  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:12.274477  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:12.299224  407330 cri.go:89] found id: ""
	I1210 06:38:12.299238  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.299245  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:12.299250  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:12.299310  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:12.324356  407330 cri.go:89] found id: ""
	I1210 06:38:12.324370  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.324377  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:12.324383  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:12.324441  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:12.355846  407330 cri.go:89] found id: ""
	I1210 06:38:12.355876  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.355883  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:12.355892  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:12.355903  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:12.426588  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:12.426608  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:12.446044  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:12.446061  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:12.519015  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:12.508422   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.508965   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.513107   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.513691   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.515195   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:12.519025  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:12.519036  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:12.595463  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:12.595494  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:15.126222  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:15.136973  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:15.137050  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:15.168527  407330 cri.go:89] found id: ""
	I1210 06:38:15.168542  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.168549  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:15.168554  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:15.168615  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:15.195472  407330 cri.go:89] found id: ""
	I1210 06:38:15.195488  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.195496  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:15.195501  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:15.195560  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:15.222272  407330 cri.go:89] found id: ""
	I1210 06:38:15.222286  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.222293  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:15.222298  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:15.222359  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:15.252445  407330 cri.go:89] found id: ""
	I1210 06:38:15.252460  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.252473  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:15.252479  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:15.252541  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:15.279037  407330 cri.go:89] found id: ""
	I1210 06:38:15.279056  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.279063  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:15.279069  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:15.279130  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:15.304272  407330 cri.go:89] found id: ""
	I1210 06:38:15.304287  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.304294  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:15.304299  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:15.304358  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:15.329937  407330 cri.go:89] found id: ""
	I1210 06:38:15.329951  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.329958  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:15.329965  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:15.329976  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:15.344908  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:15.344927  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:15.430525  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:15.420038   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.420859   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.422803   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.424594   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.426170   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:15.430538  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:15.430549  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:15.506380  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:15.506403  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:15.535708  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:15.535725  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:18.102529  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:18.114363  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:18.114433  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:18.140986  407330 cri.go:89] found id: ""
	I1210 06:38:18.141000  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.141007  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:18.141012  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:18.141070  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:18.167798  407330 cri.go:89] found id: ""
	I1210 06:38:18.167812  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.167819  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:18.167827  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:18.167883  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:18.194514  407330 cri.go:89] found id: ""
	I1210 06:38:18.194539  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.194547  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:18.194553  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:18.194614  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:18.219929  407330 cri.go:89] found id: ""
	I1210 06:38:18.219943  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.219949  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:18.219955  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:18.220013  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:18.247728  407330 cri.go:89] found id: ""
	I1210 06:38:18.247742  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.247749  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:18.247755  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:18.247814  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:18.274948  407330 cri.go:89] found id: ""
	I1210 06:38:18.274963  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.274971  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:18.274976  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:18.275034  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:18.301159  407330 cri.go:89] found id: ""
	I1210 06:38:18.301173  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.301196  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:18.301204  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:18.301222  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:18.337936  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:18.337955  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:18.404135  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:18.404153  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:18.420644  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:18.420661  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:18.488180  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:18.479576   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.480035   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.481748   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.482513   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.484281   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:18.488199  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:18.488210  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:21.064064  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:21.074224  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:21.074283  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:21.100332  407330 cri.go:89] found id: ""
	I1210 06:38:21.100347  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.100354  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:21.100359  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:21.100416  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:21.128496  407330 cri.go:89] found id: ""
	I1210 06:38:21.128511  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.128518  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:21.128523  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:21.128583  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:21.165661  407330 cri.go:89] found id: ""
	I1210 06:38:21.165675  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.165682  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:21.165687  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:21.165745  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:21.191177  407330 cri.go:89] found id: ""
	I1210 06:38:21.191191  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.191199  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:21.191204  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:21.191262  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:21.217247  407330 cri.go:89] found id: ""
	I1210 06:38:21.217263  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.217270  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:21.217275  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:21.217336  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:21.243649  407330 cri.go:89] found id: ""
	I1210 06:38:21.243663  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.243670  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:21.243675  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:21.243731  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:21.272574  407330 cri.go:89] found id: ""
	I1210 06:38:21.272589  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.272596  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:21.272604  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:21.272615  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:21.336563  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:21.328507   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.329001   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.330691   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.331320   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.332859   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:21.336573  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:21.336583  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:21.419141  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:21.419163  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:21.452486  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:21.452504  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:21.518913  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:21.518934  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:24.035407  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:24.051364  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:24.051491  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:24.079890  407330 cri.go:89] found id: ""
	I1210 06:38:24.079905  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.079913  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:24.079918  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:24.079976  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:24.108058  407330 cri.go:89] found id: ""
	I1210 06:38:24.108072  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.108089  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:24.108094  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:24.108160  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:24.136304  407330 cri.go:89] found id: ""
	I1210 06:38:24.136318  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.136325  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:24.136331  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:24.136388  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:24.166784  407330 cri.go:89] found id: ""
	I1210 06:38:24.166805  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.166813  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:24.166819  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:24.166879  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:24.194254  407330 cri.go:89] found id: ""
	I1210 06:38:24.194270  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.194278  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:24.194283  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:24.194349  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:24.220032  407330 cri.go:89] found id: ""
	I1210 06:38:24.220046  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.220053  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:24.220058  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:24.220125  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:24.249252  407330 cri.go:89] found id: ""
	I1210 06:38:24.249267  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.249275  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:24.249282  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:24.249301  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:24.332782  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:24.332809  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:24.363293  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:24.363313  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:24.439310  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:24.439334  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:24.454866  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:24.454883  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:24.518646  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:24.510636   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.511199   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.512759   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.513269   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.514934   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:24.510636   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.511199   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.512759   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.513269   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.514934   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:27.018916  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:27.029680  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:27.029748  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:27.057853  407330 cri.go:89] found id: ""
	I1210 06:38:27.057868  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.057876  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:27.057881  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:27.057943  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:27.088489  407330 cri.go:89] found id: ""
	I1210 06:38:27.088504  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.088512  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:27.088517  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:27.088576  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:27.114135  407330 cri.go:89] found id: ""
	I1210 06:38:27.114150  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.114158  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:27.114163  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:27.114222  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:27.144417  407330 cri.go:89] found id: ""
	I1210 06:38:27.144431  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.144438  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:27.144443  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:27.144502  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:27.170599  407330 cri.go:89] found id: ""
	I1210 06:38:27.170613  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.170621  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:27.170626  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:27.170704  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:27.196493  407330 cri.go:89] found id: ""
	I1210 06:38:27.196508  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.196516  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:27.196521  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:27.196577  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:27.222440  407330 cri.go:89] found id: ""
	I1210 06:38:27.222455  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.222462  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:27.222469  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:27.222480  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:27.288558  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:27.288578  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:27.304274  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:27.304290  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:27.370398  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:27.361823   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.362522   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.364129   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.364518   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.366357   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:27.361823   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.362522   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.364129   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.364518   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.366357   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:27.370408  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:27.370419  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:27.458800  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:27.458821  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:29.988954  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:29.999798  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:29.999864  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:30.095338  407330 cri.go:89] found id: ""
	I1210 06:38:30.095356  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.095364  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:30.095370  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:30.095440  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:30.129132  407330 cri.go:89] found id: ""
	I1210 06:38:30.129148  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.129156  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:30.129162  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:30.129271  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:30.157101  407330 cri.go:89] found id: ""
	I1210 06:38:30.157117  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.157124  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:30.157130  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:30.157224  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:30.184791  407330 cri.go:89] found id: ""
	I1210 06:38:30.184806  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.184814  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:30.184819  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:30.184885  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:30.211932  407330 cri.go:89] found id: ""
	I1210 06:38:30.211958  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.211966  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:30.211971  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:30.212041  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:30.238373  407330 cri.go:89] found id: ""
	I1210 06:38:30.238398  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.238407  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:30.238413  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:30.238479  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:30.266144  407330 cri.go:89] found id: ""
	I1210 06:38:30.266159  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.266167  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:30.266176  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:30.266187  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:30.337549  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:30.337570  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:30.353715  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:30.353731  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:30.430797  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:30.422887   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.423661   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.425295   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.425615   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.427098   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:30.422887   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.423661   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.425295   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.425615   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.427098   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:30.430808  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:30.430821  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:30.510900  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:30.510921  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:33.040458  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:33.051069  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:33.051132  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:33.081117  407330 cri.go:89] found id: ""
	I1210 06:38:33.081131  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.081138  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:33.081144  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:33.081232  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:33.110972  407330 cri.go:89] found id: ""
	I1210 06:38:33.110986  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.110993  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:33.110998  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:33.111055  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:33.136083  407330 cri.go:89] found id: ""
	I1210 06:38:33.136098  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.136104  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:33.136110  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:33.136170  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:33.162539  407330 cri.go:89] found id: ""
	I1210 06:38:33.162554  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.162561  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:33.162567  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:33.162628  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:33.192025  407330 cri.go:89] found id: ""
	I1210 06:38:33.192039  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.192047  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:33.192053  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:33.192114  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:33.217529  407330 cri.go:89] found id: ""
	I1210 06:38:33.217544  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.217562  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:33.217568  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:33.217637  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:33.242901  407330 cri.go:89] found id: ""
	I1210 06:38:33.242916  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.242923  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:33.242931  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:33.242942  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:33.311877  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:33.311897  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:33.327423  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:33.327438  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:33.395423  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:33.386462   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.387346   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.388905   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.389556   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.391613   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:33.386462   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.387346   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.388905   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.389556   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.391613   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:33.395434  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:33.395444  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:33.477529  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:33.477551  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:36.008120  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:36.021683  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:36.021745  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:36.049460  407330 cri.go:89] found id: ""
	I1210 06:38:36.049475  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.049482  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:36.049487  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:36.049560  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:36.076929  407330 cri.go:89] found id: ""
	I1210 06:38:36.076944  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.076951  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:36.076956  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:36.077017  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:36.103193  407330 cri.go:89] found id: ""
	I1210 06:38:36.103208  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.103214  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:36.103219  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:36.103285  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:36.129995  407330 cri.go:89] found id: ""
	I1210 06:38:36.130009  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.130024  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:36.130029  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:36.130087  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:36.156753  407330 cri.go:89] found id: ""
	I1210 06:38:36.156781  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.156789  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:36.156794  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:36.156857  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:36.188439  407330 cri.go:89] found id: ""
	I1210 06:38:36.188453  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.188461  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:36.188466  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:36.188525  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:36.214278  407330 cri.go:89] found id: ""
	I1210 06:38:36.214293  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.214300  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:36.214309  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:36.214321  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:36.280730  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:36.280750  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:36.296203  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:36.296220  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:36.364197  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:36.355593   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.356352   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358159   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358872   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.360561   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:36.355593   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.356352   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358159   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358872   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.360561   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:36.364209  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:36.364222  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:36.458076  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:36.458097  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:38.987911  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:38.998557  407330 kubeadm.go:602] duration metric: took 4m3.870918207s to restartPrimaryControlPlane
	W1210 06:38:38.998620  407330 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 06:38:38.998704  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 06:38:39.409934  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:38:39.423184  407330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:38:39.431304  407330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:38:39.431358  407330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:38:39.439341  407330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:38:39.439350  407330 kubeadm.go:158] found existing configuration files:
	
	I1210 06:38:39.439401  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:38:39.447538  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:38:39.447592  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:38:39.454886  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:38:39.462719  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:38:39.462778  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:38:39.470357  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:38:39.477894  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:38:39.477950  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:38:39.485341  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:38:39.493235  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:38:39.493292  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:38:39.500743  407330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:38:39.538320  407330 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:38:39.538555  407330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:38:39.610131  407330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:38:39.610196  407330 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:38:39.610230  407330 kubeadm.go:319] OS: Linux
	I1210 06:38:39.610281  407330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:38:39.610328  407330 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:38:39.610374  407330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:38:39.610421  407330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:38:39.610468  407330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:38:39.610517  407330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:38:39.610561  407330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:38:39.610608  407330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:38:39.610653  407330 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:38:39.676087  407330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:38:39.676189  407330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:38:39.676279  407330 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:38:39.683789  407330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:38:39.689387  407330 out.go:252]   - Generating certificates and keys ...
	I1210 06:38:39.689490  407330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:38:39.689554  407330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:38:39.689629  407330 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:38:39.689689  407330 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:38:39.689759  407330 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:38:39.689811  407330 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:38:39.689904  407330 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:38:39.689978  407330 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:38:39.690060  407330 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:38:39.690139  407330 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:38:39.690176  407330 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:38:39.690241  407330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:38:40.131783  407330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:38:40.503719  407330 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:38:40.658362  407330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:38:41.256208  407330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:38:41.407412  407330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:38:41.408125  407330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:38:41.410853  407330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:38:41.414436  407330 out.go:252]   - Booting up control plane ...
	I1210 06:38:41.414546  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:38:41.414623  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:38:41.414696  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:38:41.431657  407330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:38:41.431964  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:38:41.440211  407330 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:38:41.440329  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:38:41.440568  407330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:38:41.565122  407330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:38:41.565287  407330 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:42:41.565436  407330 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000253721s
	I1210 06:42:41.565465  407330 kubeadm.go:319] 
	I1210 06:42:41.565522  407330 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:42:41.565554  407330 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:42:41.565658  407330 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:42:41.565663  407330 kubeadm.go:319] 
	I1210 06:42:41.565766  407330 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:42:41.565797  407330 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:42:41.565827  407330 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:42:41.565830  407330 kubeadm.go:319] 
	I1210 06:42:41.570718  407330 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:42:41.571209  407330 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:42:41.571330  407330 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:42:41.571595  407330 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:42:41.571607  407330 kubeadm.go:319] 
	I1210 06:42:41.571752  407330 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 06:42:41.571857  407330 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000253721s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 06:42:41.571950  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 06:42:41.983114  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:42:41.996619  407330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:42:41.996677  407330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:42:42.015710  407330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:42:42.015721  407330 kubeadm.go:158] found existing configuration files:
	
	I1210 06:42:42.015783  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:42:42.031380  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:42:42.031448  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:42:42.040300  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:42:42.049113  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:42:42.049177  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:42:42.057272  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:42:42.066509  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:42:42.066573  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:42:42.076663  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:42:42.086749  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:42:42.086829  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:42:42.096582  407330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:42:42.144385  407330 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:42:42.144469  407330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:42:42.248727  407330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:42:42.248801  407330 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:42:42.248835  407330 kubeadm.go:319] OS: Linux
	I1210 06:42:42.248888  407330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:42:42.248946  407330 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:42:42.249004  407330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:42:42.249052  407330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:42:42.249117  407330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:42:42.249198  407330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:42:42.249245  407330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:42:42.249306  407330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:42:42.249359  407330 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:42:42.316721  407330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:42:42.316825  407330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:42:42.316916  407330 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:42:42.325666  407330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:42:42.330985  407330 out.go:252]   - Generating certificates and keys ...
	I1210 06:42:42.331095  407330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:42:42.331182  407330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:42:42.331258  407330 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:42:42.331331  407330 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:42:42.331424  407330 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:42:42.331487  407330 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:42:42.331560  407330 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:42:42.331637  407330 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:42:42.331721  407330 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:42:42.331801  407330 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:42:42.331847  407330 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:42:42.331912  407330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:42:42.541750  407330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:42:43.048349  407330 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:42:43.167759  407330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:42:43.323314  407330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:42:43.407090  407330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:42:43.408333  407330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:42:43.412234  407330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:42:43.415621  407330 out.go:252]   - Booting up control plane ...
	I1210 06:42:43.415734  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:42:43.415811  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:42:43.416436  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:42:43.431439  407330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:42:43.431813  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:42:43.438586  407330 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:42:43.438900  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:42:43.438951  407330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:42:43.563199  407330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:42:43.563333  407330 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:46:43.563419  407330 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000308988s
	I1210 06:46:43.563446  407330 kubeadm.go:319] 
	I1210 06:46:43.563502  407330 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:46:43.563534  407330 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:46:43.563637  407330 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:46:43.563641  407330 kubeadm.go:319] 
	I1210 06:46:43.563744  407330 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:46:43.563775  407330 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:46:43.563804  407330 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:46:43.563807  407330 kubeadm.go:319] 
	I1210 06:46:43.567965  407330 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:46:43.568389  407330 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:46:43.568496  407330 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:46:43.568730  407330 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:46:43.568734  407330 kubeadm.go:319] 
	I1210 06:46:43.568801  407330 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:46:43.568851  407330 kubeadm.go:403] duration metric: took 12m8.481939807s to StartCluster
	I1210 06:46:43.568881  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:46:43.568941  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:46:43.595798  407330 cri.go:89] found id: ""
	I1210 06:46:43.595831  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.595854  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:46:43.595860  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:46:43.595925  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:46:43.621092  407330 cri.go:89] found id: ""
	I1210 06:46:43.621107  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.621114  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:46:43.621123  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:46:43.621181  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:46:43.646506  407330 cri.go:89] found id: ""
	I1210 06:46:43.646520  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.646528  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:46:43.646533  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:46:43.646593  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:46:43.671975  407330 cri.go:89] found id: ""
	I1210 06:46:43.671990  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.671997  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:46:43.672003  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:46:43.672059  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:46:43.698910  407330 cri.go:89] found id: ""
	I1210 06:46:43.698925  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.698932  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:46:43.698937  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:46:43.698997  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:46:43.727644  407330 cri.go:89] found id: ""
	I1210 06:46:43.727660  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.727667  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:46:43.727672  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:46:43.727732  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:46:43.752849  407330 cri.go:89] found id: ""
	I1210 06:46:43.752864  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.752871  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:46:43.752879  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:46:43.752889  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:46:43.818161  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:46:43.818181  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:46:43.833400  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:46:43.833417  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:46:43.902591  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:46:43.894870   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.895546   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897128   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897673   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.899158   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:46:43.894870   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.895546   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897128   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897673   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.899158   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:46:43.902602  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:46:43.902614  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:46:43.975424  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:46:43.975445  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 06:46:44.022327  407330 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:46:44.022377  407330 out.go:285] * 
	W1210 06:46:44.022442  407330 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:46:44.022452  407330 out.go:285] * 
	W1210 06:46:44.024584  407330 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:46:44.031496  407330 out.go:203] 
	W1210 06:46:44.034389  407330 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:46:44.034453  407330 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:46:44.034475  407330 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:46:44.037811  407330 out.go:203] 
	
	
	==> CRI-O <==
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914305234Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914347581Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914410941Z" level=info msg="Create NRI interface"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914519907Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914528243Z" level=info msg="runtime interface created"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914540707Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914547246Z" level=info msg="runtime interface starting up..."
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914553523Z" level=info msg="starting plugins..."
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914566389Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914635518Z" level=info msg="No systemd watchdog enabled"
	Dec 10 06:34:32 functional-253997 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.679749304Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=256aed1f-deb7-4ef3-85cd-131eefce5f31 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.680508073Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=d66c85ac-bdac-47c8-b0cb-0b9c6495c2c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681012677Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=9d08e49c-548c-44b3-98b1-7f3a5851a031 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681572306Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=0bc6e3be-4b4d-4362-bc99-b8372d06365e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681969496Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2f86c405-f63c-4d07-a2ec-618b9449eabe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.682410707Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f71d0106-3216-4008-9111-b1a84be0126f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.682849883Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=c187c18f-0638-4353-a242-3d51d64c2a33 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.320018913Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=2592a99c-52fd-46d2-9cab-44207ad0e0c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321028729Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=f6ff0057-6b99-4df5-9aca-bca9adc94791 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321868157Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=9722b5ff-a4b3-445c-b3c4-2f4e55341b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322453627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=b61d672b-0949-4316-8a7f-4071c86b03d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322949111Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=450c82e8-dfb9-4142-8b40-08c4b80c0ef4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.32354821Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=0920e0ed-8fd6-46c5-8404-88b74f388f67 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.324043932Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=51273fc0-dd5c-4357-b908-13dc96e1efa4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:46:47.708501   21986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:47.708918   21986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:47.710527   21986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:47.710859   21986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:47.712370   21986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 06:15] overlayfs: idmapped layers are currently not supported
	[Dec10 06:16] overlayfs: idmapped layers are currently not supported
	[Dec10 06:19] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:46:47 up  3:29,  0 user,  load average: 0.09, 0.13, 0.44
	Linux functional-253997 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:46:44 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:46:45 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 642.
	Dec 10 06:46:45 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:46:45 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:46:45 functional-253997 kubelet[21859]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:46:45 functional-253997 kubelet[21859]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:46:45 functional-253997 kubelet[21859]: E1210 06:46:45.696929   21859 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:46:45 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:46:45 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:46:46 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 643.
	Dec 10 06:46:46 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:46:46 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:46:46 functional-253997 kubelet[21878]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:46:46 functional-253997 kubelet[21878]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:46:46 functional-253997 kubelet[21878]: E1210 06:46:46.373009   21878 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:46:46 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:46:46 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:46:47 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 644.
	Dec 10 06:46:47 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:46:47 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:46:47 functional-253997 kubelet[21901]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:46:47 functional-253997 kubelet[21901]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:46:47 functional-253997 kubelet[21901]: E1210 06:46:47.205666   21901 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:46:47 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:46:47 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 2 (366.433743ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.27s)
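The kubelet journal above isolates the root cause: kubelet v1.35.0-rc.1 refuses to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver never comes up and every check in this section fails by cascade. A minimal sketch of the opt-in the kubeadm warning describes, assuming direct access to the node, with the /var/lib/kubelet/config.yaml path taken from the kubelet-start lines above and failCgroupV1 assumed to be the YAML spelling of the 'FailCgroupV1' option the warning cites:

	# append the cgroup v1 opt-in to the generated kubelet config, then restart kubelet
	minikube ssh -p functional-253997 -- "sudo sh -c 'echo failCgroupV1: false >> /var/lib/kubelet/config.yaml' && sudo systemctl restart kubelet"
	# if curl is present in the node image, the kubelet-check endpoint should then answer
	minikube ssh -p functional-253997 -- curl -s http://127.0.0.1:10248/healthz

Per its own text, the kubeadm SystemVerification warning would additionally need to be skipped for a full kubeadm init to pass.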

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-253997 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-253997 apply -f testdata/invalidsvc.yaml: exit status 1 (66.242739ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-253997 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.07s)
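Note that the test never reaches validation of the invalid manifest: kubectl fails one step earlier while downloading the OpenAPI schema, because nothing is listening on 192.168.49.2:8441 (the apiserver casualty from the previous sections). A quick probe confirms the cascade, assuming curl is available on the host:

	# expect "connection refused" here, matching the stderr above
	curl -k https://192.168.49.2:8441/healthz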

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.78s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-253997 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-253997 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-253997 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-253997 --alsologtostderr -v=1] stderr:
I1210 06:48:56.936934  424768 out.go:360] Setting OutFile to fd 1 ...
I1210 06:48:56.937120  424768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:48:56.937128  424768 out.go:374] Setting ErrFile to fd 2...
I1210 06:48:56.937133  424768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:48:56.937470  424768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:48:56.937738  424768 mustload.go:66] Loading cluster: functional-253997
I1210 06:48:56.938192  424768 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:48:56.938680  424768 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
I1210 06:48:56.958733  424768 host.go:66] Checking if "functional-253997" exists ...
I1210 06:48:56.959094  424768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 06:48:57.017018  424768 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:48:57.006844578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 06:48:57.017180  424768 api_server.go:166] Checking apiserver status ...
I1210 06:48:57.017285  424768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 06:48:57.017332  424768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
I1210 06:48:57.035884  424768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
W1210 06:48:57.147052  424768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1210 06:48:57.150248  424768 out.go:179] * The control-plane node functional-253997 apiserver is not running: (state=Stopped)
I1210 06:48:57.153177  424768 out.go:179]   To start a cluster, run: "minikube start -p functional-253997"
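No URL is produced because the dashboard command's preflight apiserver check, the `sudo pgrep -xnf kube-apiserver.*minikube.*` run logged above, finds no apiserver process and aborts. The same check can be reproduced by hand, a sketch assuming the profile container is still running:

	minikube ssh -p functional-253997 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# exit status 1 with no output means no apiserver process, hence state=Stopped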
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-253997
helpers_test.go:244: (dbg) docker inspect functional-253997:

-- stdout --
	[
	    {
	        "Id": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	        "Created": "2025-12-10T06:19:33.832297734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:33.914699563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hosts",
	        "LogPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7-json.log",
	        "Name": "/functional-253997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-253997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-253997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	                "LowerDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-253997",
	                "Source": "/var/lib/docker/volumes/functional-253997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253997",
	                "name.minikube.sigs.k8s.io": "functional-253997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484c42edbd556e17e2c5a836571d3275bc9cbd157b70c100e08a5b73322a62e6",
	            "SandboxKey": "/var/run/docker/netns/484c42edbd55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ba:52:c5:6e:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fc5aadc020d5981cfdab05726de722ae81d653cbc30b66014568069657370fe7",
	                    "EndpointID": "9ecd4882ea5c9fe5581b68ad15a0a2f450e34777aee426fcc158008c81076e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253997",
	                        "256e059cdf1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
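A note on the inspect output above: the top-level NetworkSettings fields (IPAddress, Gateway, MacAddress) are empty because the container is attached to the user-defined network "functional-253997", so the live addressing sits under NetworkSettings.Networks instead. One way to pull the node IP straight out of docker, sketched here with docker's Go-template support (index is needed because the network name contains a hyphen):

	docker inspect -f '{{(index .NetworkSettings.Networks "functional-253997").IPAddress}}' functional-253997
	# for the container above this should print 192.168.49.2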
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997: exit status 2 (324.122753ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-253997 service hello-node --url                                                                                                          │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh       │ functional-253997 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ mount     │ -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001:/mount-9p --alsologtostderr -v=1              │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh       │ functional-253997 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh       │ functional-253997 ssh -- ls -la /mount-9p                                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh       │ functional-253997 ssh cat /mount-9p/test-1765349326704343820                                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh       │ functional-253997 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh       │ functional-253997 ssh sudo umount -f /mount-9p                                                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh       │ functional-253997 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ mount     │ -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1572231734/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh       │ functional-253997 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh       │ functional-253997 ssh -- ls -la /mount-9p                                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh       │ functional-253997 ssh sudo umount -f /mount-9p                                                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ mount     │ -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount1 --alsologtostderr -v=1                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh       │ functional-253997 ssh findmnt -T /mount1                                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ mount     │ -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount2 --alsologtostderr -v=1                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ mount     │ -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount3 --alsologtostderr -v=1                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh       │ functional-253997 ssh findmnt -T /mount1                                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh       │ functional-253997 ssh findmnt -T /mount2                                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh       │ functional-253997 ssh findmnt -T /mount3                                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ mount     │ -p functional-253997 --kill=true                                                                                                                    │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ start     │ -p functional-253997 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ start     │ -p functional-253997 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ start     │ -p functional-253997 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                   │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-253997 --alsologtostderr -v=1                                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:48:56
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:48:56.685450  424691 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:48:56.685567  424691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:48:56.685574  424691 out.go:374] Setting ErrFile to fd 2...
	I1210 06:48:56.685579  424691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:48:56.686200  424691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:48:56.686646  424691 out.go:368] Setting JSON to false
	I1210 06:48:56.687478  424691 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12689,"bootTime":1765336648,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:48:56.687545  424691 start.go:143] virtualization:  
	I1210 06:48:56.690677  424691 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:48:56.694538  424691 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:48:56.694730  424691 notify.go:221] Checking for updates...
	I1210 06:48:56.700184  424691 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:48:56.703110  424691 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:48:56.706066  424691 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:48:56.709290  424691 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:48:56.712264  424691 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:48:56.715707  424691 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:48:56.716347  424691 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:48:56.743289  424691 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:48:56.743441  424691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:48:56.803116  424691 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:48:56.793157147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:48:56.803230  424691 docker.go:319] overlay module found
	I1210 06:48:56.806465  424691 out.go:179] * Using the docker driver based on existing profile
	I1210 06:48:56.809442  424691 start.go:309] selected driver: docker
	I1210 06:48:56.809468  424691 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:48:56.809571  424691 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:48:56.809682  424691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:48:56.863715  424691 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:48:56.854522382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:48:56.864156  424691 cni.go:84] Creating CNI manager for ""
	I1210 06:48:56.864220  424691 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:48:56.864260  424691 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:48:56.867300  424691 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914305234Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914347581Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914410941Z" level=info msg="Create NRI interface"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914519907Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914528243Z" level=info msg="runtime interface created"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914540707Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914547246Z" level=info msg="runtime interface starting up..."
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914553523Z" level=info msg="starting plugins..."
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914566389Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914635518Z" level=info msg="No systemd watchdog enabled"
	Dec 10 06:34:32 functional-253997 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.679749304Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=256aed1f-deb7-4ef3-85cd-131eefce5f31 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.680508073Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=d66c85ac-bdac-47c8-b0cb-0b9c6495c2c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681012677Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=9d08e49c-548c-44b3-98b1-7f3a5851a031 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681572306Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=0bc6e3be-4b4d-4362-bc99-b8372d06365e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681969496Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2f86c405-f63c-4d07-a2ec-618b9449eabe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.682410707Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f71d0106-3216-4008-9111-b1a84be0126f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.682849883Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=c187c18f-0638-4353-a242-3d51d64c2a33 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.320018913Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=2592a99c-52fd-46d2-9cab-44207ad0e0c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321028729Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=f6ff0057-6b99-4df5-9aca-bca9adc94791 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321868157Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=9722b5ff-a4b3-445c-b3c4-2f4e55341b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322453627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=b61d672b-0949-4316-8a7f-4071c86b03d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322949111Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=450c82e8-dfb9-4142-8b40-08c4b80c0ef4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.32354821Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=0920e0ed-8fd6-46c5-8404-88b74f388f67 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.324043932Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=51273fc0-dd5c-4357-b908-13dc96e1efa4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:48:58.223648   24097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:48:58.224062   24097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:48:58.225433   24097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:48:58.225904   24097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:48:58.227655   24097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 06:15] overlayfs: idmapped layers are currently not supported
	[Dec10 06:16] overlayfs: idmapped layers are currently not supported
	[Dec10 06:19] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:48:58 up  3:31,  0 user,  load average: 0.52, 0.25, 0.45
	Linux functional-253997 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:48:55 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:48:56 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 816.
	Dec 10 06:48:56 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:56 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:56 functional-253997 kubelet[23972]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:56 functional-253997 kubelet[23972]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:56 functional-253997 kubelet[23972]: E1210 06:48:56.188862   23972 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:48:56 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:48:56 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:48:56 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 817.
	Dec 10 06:48:56 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:56 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:56 functional-253997 kubelet[23985]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:56 functional-253997 kubelet[23985]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:56 functional-253997 kubelet[23985]: E1210 06:48:56.937129   23985 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:48:56 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:48:56 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:48:57 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 818.
	Dec 10 06:48:57 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:57 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:57 functional-253997 kubelet[24015]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:57 functional-253997 kubelet[24015]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:57 functional-253997 kubelet[24015]: E1210 06:48:57.682506   24015 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:48:57 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:48:57 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 2 (344.252999ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.78s)
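The kubelet journal above pinpoints the blocker: every restart (counters 816 through 818) fails config validation with "kubelet is configured to not run on a host using cgroup v1", so the apiserver on port 8441 never comes up and the dashboard command has nothing to reach. A quick way to check which cgroup version a host is actually running, assuming GNU stat as on this Ubuntu 20.04 runner:

	stat -fc %T /sys/fs/cgroup/
	# "cgroup2fs" means cgroup v2 (unified); "tmpfs" means a cgroup v1 hierarchy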

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (3.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 status: exit status 2 (352.322239ms)

-- stdout --
	functional-253997
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-253997 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (311.861096ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-253997 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
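The argument to -f/--format is a Go text/template rendered against minikube's status struct, so only the {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} expressions are substituted; everything else, including the literal "kublet:" label, is echoed from the template as-is. A minimal sketch of the same probe with a different template:

	out/minikube-linux-arm64 -p functional-253997 status -f '{{.Host}}/{{.APIServer}}'
	# for the state captured here this would print: Running/Stopped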
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 status -o json: exit status 2 (337.683322ms)

-- stdout --
	{"Name":"functional-253997","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-253997 status -o json" : exit status 2
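Of the three output modes, the JSON form is the one intended for machine consumption; note that status still exits 2 while a component is down, so a consumer must read stdout regardless of the exit code. A sketch of gating on the kubelet field, assuming jq is available on the host (it is not shipped with minikube):

	out/minikube-linux-arm64 -p functional-253997 status -o json | jq -r .Kubelet
	# prints Stopped for the state captured here; Running on a healthy node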
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-253997
helpers_test.go:244: (dbg) docker inspect functional-253997:

-- stdout --
	[
	    {
	        "Id": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	        "Created": "2025-12-10T06:19:33.832297734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:33.914699563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hosts",
	        "LogPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7-json.log",
	        "Name": "/functional-253997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-253997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-253997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	                "LowerDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-253997",
	                "Source": "/var/lib/docker/volumes/functional-253997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253997",
	                "name.minikube.sigs.k8s.io": "functional-253997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484c42edbd556e17e2c5a836571d3275bc9cbd157b70c100e08a5b73322a62e6",
	            "SandboxKey": "/var/run/docker/netns/484c42edbd55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ba:52:c5:6e:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fc5aadc020d5981cfdab05726de722ae81d653cbc30b66014568069657370fe7",
	                    "EndpointID": "9ecd4882ea5c9fe5581b68ad15a0a2f450e34777aee426fcc158008c81076e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253997",
	                        "256e059cdf1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997: exit status 2 (321.377564ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-253997 service list                                                                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ service │ functional-253997 service list -o json                                                                                                              │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ service │ functional-253997 service --namespace=default --https --url hello-node                                                                              │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ service │ functional-253997 service hello-node --url --format={{.IP}}                                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ service │ functional-253997 service hello-node --url                                                                                                          │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh     │ functional-253997 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ mount   │ -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001:/mount-9p --alsologtostderr -v=1              │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh     │ functional-253997 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh     │ functional-253997 ssh -- ls -la /mount-9p                                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh     │ functional-253997 ssh cat /mount-9p/test-1765349326704343820                                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh     │ functional-253997 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh     │ functional-253997 ssh sudo umount -f /mount-9p                                                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh     │ functional-253997 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ mount   │ -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1572231734/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh     │ functional-253997 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh     │ functional-253997 ssh -- ls -la /mount-9p                                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh     │ functional-253997 ssh sudo umount -f /mount-9p                                                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ mount   │ -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount1 --alsologtostderr -v=1                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh     │ functional-253997 ssh findmnt -T /mount1                                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ mount   │ -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount2 --alsologtostderr -v=1                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ mount   │ -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount3 --alsologtostderr -v=1                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh     │ functional-253997 ssh findmnt -T /mount1                                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh     │ functional-253997 ssh findmnt -T /mount2                                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh     │ functional-253997 ssh findmnt -T /mount3                                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ mount   │ -p functional-253997 --kill=true                                                                                                                    │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
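The repeated `ssh findmnt -T /mount-9p | grep 9p` rows in the table above are how the mount tests decide whether a 9p mount is live inside the node. A minimal host-side sketch of that check in Go (helper name hypothetical; assumes `minikube` and `findmnt` are on PATH and the profile from the table exists):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// check9pMount asks the node whether target is served by a 9p filesystem,
	// mirroring the `ssh findmnt -T /mount-9p | grep 9p` rows in the table above.
	func check9pMount(profile, target string) (bool, error) {
		// `minikube -p <profile> ssh -- <cmd>` runs the command inside the node.
		out, err := exec.Command("minikube", "-p", profile, "ssh", "--",
			"findmnt", "-T", target).CombinedOutput()
		if err != nil {
			return false, fmt.Errorf("findmnt failed: %w: %s", err, out)
		}
		return strings.Contains(string(out), "9p"), nil
	}

	func main() {
		ok, err := check9pMount("functional-253997", "/mount-9p")
		fmt.Println(ok, err)
	}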
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:34:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:34:29.186876  407330 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:34:29.187053  407330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:34:29.187058  407330 out.go:374] Setting ErrFile to fd 2...
	I1210 06:34:29.187062  407330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:34:29.187341  407330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:34:29.187713  407330 out.go:368] Setting JSON to false
	I1210 06:34:29.188576  407330 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11822,"bootTime":1765336648,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:34:29.188634  407330 start.go:143] virtualization:  
	I1210 06:34:29.192149  407330 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:34:29.195073  407330 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:34:29.195162  407330 notify.go:221] Checking for updates...
	I1210 06:34:29.200831  407330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:34:29.203909  407330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:34:29.206776  407330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:34:29.209617  407330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:34:29.212440  407330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:34:29.215839  407330 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:34:29.215937  407330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:34:29.239404  407330 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:34:29.239516  407330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:34:29.302303  407330 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:34:29.292878865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:34:29.302405  407330 docker.go:319] overlay module found
	I1210 06:34:29.305588  407330 out.go:179] * Using the docker driver based on existing profile
	I1210 06:34:29.308369  407330 start.go:309] selected driver: docker
	I1210 06:34:29.308379  407330 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:34:29.308484  407330 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:34:29.308590  407330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:34:29.367055  407330 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:34:29.35802689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:34:29.367451  407330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:34:29.367476  407330 cni.go:84] Creating CNI manager for ""
	I1210 06:34:29.367527  407330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:34:29.367575  407330 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:34:29.370834  407330 out.go:179] * Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	I1210 06:34:29.373779  407330 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:34:29.376601  407330 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:34:29.379406  407330 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:34:29.379504  407330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:34:29.398798  407330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:34:29.398809  407330 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:34:29.439425  407330 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 06:34:29.641198  407330 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
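The two 404 warnings above are the expected result of minikube probing its preload mirrors in order before falling back to the local image cache (no preload tarball is published yet for v1.35.0-rc.1). A simplified sketch of that probe loop in Go (URLs copied from the log; the helper name and loop shape are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"net/http"
	)

	// firstAvailable returns the first URL that answers 200 to a HEAD request,
	// or "" when every mirror 404s, which is the case logged above.
	func firstAvailable(urls []string) (string, error) {
		for _, u := range urls {
			resp, err := http.Head(u)
			if err != nil {
				return "", err
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return u, nil
			}
			fmt.Printf("W preload %q status code: %d\n", u, resp.StatusCode)
		}
		return "", nil
	}

	func main() {
		u, err := firstAvailable([]string{
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4",
			"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4",
		})
		fmt.Println(u, err)
	}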
	I1210 06:34:29.641344  407330 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/config.json ...
	I1210 06:34:29.641548  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:29.641601  407330 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:34:29.641630  407330 start.go:360] acquireMachinesLock for functional-253997: {Name:mkd4a204596bf14d7530a6c5c103756527acdf26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:29.641675  407330 start.go:364] duration metric: took 26.355µs to acquireMachinesLock for "functional-253997"
	I1210 06:34:29.641688  407330 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:34:29.641692  407330 fix.go:54] fixHost starting: 
	I1210 06:34:29.641950  407330 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:34:29.660018  407330 fix.go:112] recreateIfNeeded on functional-253997: state=Running err=<nil>
	W1210 06:34:29.660039  407330 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:34:29.663260  407330 out.go:252] * Updating the running docker "functional-253997" container ...
	I1210 06:34:29.663287  407330 machine.go:94] provisionDockerMachine start ...
	I1210 06:34:29.663366  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:29.683378  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:29.683692  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:29.683698  407330 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:34:29.821832  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:29.837224  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:34:29.837239  407330 ubuntu.go:182] provisioning hostname "functional-253997"
	I1210 06:34:29.837320  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:29.868971  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:29.869301  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:29.869310  407330 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-253997 && echo "functional-253997" | sudo tee /etc/hostname
	I1210 06:34:29.986840  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:30.112009  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:34:30.112104  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:30.132596  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:30.132908  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:30.132923  407330 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-253997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-253997/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-253997' | sudo tee -a /etc/hosts; 
				fi
			fi
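The shell block just executed over SSH is an idempotent /etc/hosts fixup: if some line already carries the hostname it does nothing, otherwise it rewrites the 127.0.1.1 entry, or appends one when none exists. The same logic expressed in Go (a sketch of the intent only; the real provisioner runs the shell above inside the node):

	package main

	import (
		"os"
		"regexp"
	)

	// ensureHostsEntry mirrors the grep/sed/tee logic in the log: skip if the
	// hostname is already present, else rewrite the 127.0.1.1 line, else append.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
			return nil // already present
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		entry := "127.0.1.1 " + hostname
		if re.Match(data) {
			data = re.ReplaceAll(data, []byte(entry))
		} else {
			data = append(data, []byte(entry+"\n")...)
		}
		return os.WriteFile(path, data, 0o644)
	}

	func main() { _ = ensureHostsEntry("/etc/hosts", "functional-253997") }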
	I1210 06:34:30.208840  407330 cache.go:107] acquiring lock: {Name:mk9268471d41d450748d4b0133d1a043378776f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.208835  407330 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.208914  407330 cache.go:107] acquiring lock: {Name:mk8250dc655f821bf2674ed77a7683798ede4f4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.208957  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:34:30.208967  407330 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 138.989µs
	I1210 06:34:30.208975  407330 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:34:30.208986  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:34:30.209001  407330 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 97.733µs
	I1210 06:34:30.208999  407330 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209007  407330 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:34:30.209031  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:34:30.209036  407330 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.599µs
	I1210 06:34:30.209024  407330 cache.go:107] acquiring lock: {Name:mk1c0334e51772e4f6b3429cc05b91ec19d54b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209041  407330 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:34:30.209051  407330 cache.go:107] acquiring lock: {Name:mkc2831727d74e5fadb1c00bd89f0236b385200e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209067  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:34:30.209072  407330 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 53.268µs
	I1210 06:34:30.209089  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:34:30.209088  407330 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:34:30.209095  407330 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 45.753µs
	I1210 06:34:30.209100  407330 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:34:30.209108  407330 cache.go:107] acquiring lock: {Name:mk7e052900e41419dc78d3311f27db508a8f64b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209102  407330 cache.go:107] acquiring lock: {Name:mk41aaba55a3296a937cf380d44dc9920df69546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209134  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:34:30.209138  407330 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 31.27µs
	I1210 06:34:30.209143  407330 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:34:30.209145  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:34:30.209151  407330 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 50.536µs
	I1210 06:34:30.209155  407330 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:34:30.209160  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:34:30.209163  407330 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 351.676µs
	I1210 06:34:30.209168  407330 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:34:30.209180  407330 cache.go:87] Successfully saved all images to host disk.
	I1210 06:34:30.290041  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:34:30.290057  407330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 06:34:30.290077  407330 ubuntu.go:190] setting up certificates
	I1210 06:34:30.290086  407330 provision.go:84] configureAuth start
	I1210 06:34:30.290163  407330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:34:30.308042  407330 provision.go:143] copyHostCerts
	I1210 06:34:30.308132  407330 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 06:34:30.308140  407330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:34:30.308215  407330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 06:34:30.308356  407330 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 06:34:30.308366  407330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:34:30.308393  407330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 06:34:30.308451  407330 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 06:34:30.308454  407330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:34:30.308477  407330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 06:34:30.308526  407330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.functional-253997 san=[127.0.0.1 192.168.49.2 functional-253997 localhost minikube]
	I1210 06:34:30.594902  407330 provision.go:177] copyRemoteCerts
	I1210 06:34:30.594965  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:34:30.595003  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:30.611740  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:30.721082  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:34:30.738821  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:34:30.756666  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:34:30.774292  407330 provision.go:87] duration metric: took 484.176925ms to configureAuth
	I1210 06:34:30.774310  407330 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:34:30.774512  407330 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:34:30.774629  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:30.792842  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:30.793168  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:30.793179  407330 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:34:31.164456  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:34:31.164470  407330 machine.go:97] duration metric: took 1.501175708s to provisionDockerMachine
	I1210 06:34:31.164497  407330 start.go:293] postStartSetup for "functional-253997" (driver="docker")
	I1210 06:34:31.164510  407330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:34:31.164571  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:34:31.164607  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.185147  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.293395  407330 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:34:31.296969  407330 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:34:31.296987  407330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:34:31.296998  407330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 06:34:31.297053  407330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 06:34:31.297133  407330 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 06:34:31.297238  407330 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> hosts in /etc/test/nested/copy/364265
	I1210 06:34:31.297285  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364265
	I1210 06:34:31.305181  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:34:31.324368  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts --> /etc/test/nested/copy/364265/hosts (40 bytes)
	I1210 06:34:31.342686  407330 start.go:296] duration metric: took 178.173087ms for postStartSetup
	I1210 06:34:31.342778  407330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:34:31.342817  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.360907  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.462708  407330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:34:31.467744  407330 fix.go:56] duration metric: took 1.826044535s for fixHost
	I1210 06:34:31.467760  407330 start.go:83] releasing machines lock for "functional-253997", held for 1.826077816s
	I1210 06:34:31.467840  407330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:34:31.485284  407330 ssh_runner.go:195] Run: cat /version.json
	I1210 06:34:31.485341  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.485360  407330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:34:31.485410  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.504331  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.505583  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.702850  407330 ssh_runner.go:195] Run: systemctl --version
	I1210 06:34:31.710100  407330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:34:31.751135  407330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:34:31.755552  407330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:34:31.755612  407330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:34:31.763681  407330 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:34:31.763695  407330 start.go:496] detecting cgroup driver to use...
	I1210 06:34:31.763726  407330 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:34:31.763773  407330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:34:31.779177  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:34:31.792657  407330 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:34:31.792726  407330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:34:31.808481  407330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:34:31.821835  407330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:34:31.953412  407330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:34:32.070663  407330 docker.go:234] disabling docker service ...
	I1210 06:34:32.070719  407330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:34:32.089582  407330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:34:32.103903  407330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:34:32.229247  407330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:34:32.354550  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:34:32.368208  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:34:32.383037  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:32.544686  407330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:34:32.544766  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.554538  407330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:34:32.554607  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.563600  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.572445  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.581785  407330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:34:32.589992  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.599257  407330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.607809  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.616790  407330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:34:32.624404  407330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:34:32.631884  407330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:34:32.742959  407330 ssh_runner.go:195] Run: sudo systemctl restart crio
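Each `sudo sed -i` in the sequence above is a line-anchored, idempotent rewrite of one key in /etc/crio/crio.conf.d/02-crio.conf, applied before the daemon-reload and crio restart take effect. The same edit expressed in Go (a sketch under that reading, not minikube's implementation, and it would have to run inside the node with the paths shown):

	package main

	import (
		"os"
		"regexp"
	)

	// setConfKey rewrites every `key = ...` line in a crio drop-in, matching
	// the behaviour of the `sudo sed -i 's|^.*key = .*$|...|'` commands above.
	func setConfKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		// Keys and values taken from the log lines above.
		_ = setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10.1")
		_ = setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs")
	}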
	I1210 06:34:32.924926  407330 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:34:32.925015  407330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:34:32.931953  407330 start.go:564] Will wait 60s for crictl version
	I1210 06:34:32.932037  407330 ssh_runner.go:195] Run: which crictl
	I1210 06:34:32.936975  407330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:34:32.972701  407330 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:34:32.972786  407330 ssh_runner.go:195] Run: crio --version
	I1210 06:34:33.008288  407330 ssh_runner.go:195] Run: crio --version
	I1210 06:34:33.045101  407330 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:34:33.048270  407330 cli_runner.go:164] Run: docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:34:33.065511  407330 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:34:33.072736  407330 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 06:34:33.075695  407330 kubeadm.go:884] updating cluster {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:34:33.075981  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:33.225944  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:33.376252  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:33.530247  407330 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:34:33.530325  407330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:34:33.568941  407330 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:34:33.568954  407330 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:34:33.568960  407330 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1210 06:34:33.569060  407330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-253997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:34:33.569145  407330 ssh_runner.go:195] Run: crio config
	I1210 06:34:33.643186  407330 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 06:34:33.643211  407330 cni.go:84] Creating CNI manager for ""
	I1210 06:34:33.643224  407330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:34:33.643242  407330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:34:33.643280  407330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-253997 NodeName:functional-253997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:34:33.643429  407330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-253997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
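The kubeadm config printed above is generated from the cluster config and then shipped to the node (the 2069-byte scp to /var/tmp/minikube/kubeadm.yaml.new a few lines below). A toy rendering of just the InitConfiguration fragment with text/template, with the struct and its fields chosen here purely for illustration:

	package main

	import (
		"os"
		"text/template"
	)

	// Only the fields this fragment needs; the real config carries many more.
	type clusterParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}

	const initFragment = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("init").Parse(initFragment))
		// Values taken from the rendered config above.
		_ = t.Execute(os.Stdout, clusterParams{
			AdvertiseAddress: "192.168.49.2",
			BindPort:         8441,
			NodeName:         "functional-253997",
		})
	}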
	I1210 06:34:33.643524  407330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:34:33.653419  407330 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:34:33.653495  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:34:33.663141  407330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:34:33.678587  407330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:34:33.693949  407330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1210 06:34:33.710464  407330 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:34:33.714723  407330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:34:33.827439  407330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:34:34.376520  407330 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997 for IP: 192.168.49.2
	I1210 06:34:34.376531  407330 certs.go:195] generating shared ca certs ...
	I1210 06:34:34.376561  407330 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:34:34.376695  407330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 06:34:34.376739  407330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 06:34:34.376746  407330 certs.go:257] generating profile certs ...
	I1210 06:34:34.376830  407330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key
	I1210 06:34:34.376883  407330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423
	I1210 06:34:34.376918  407330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key
	I1210 06:34:34.377046  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 06:34:34.377076  407330 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 06:34:34.377083  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:34:34.377112  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:34:34.377138  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:34:34.377165  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 06:34:34.377235  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:34:34.377907  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:34:34.400957  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:34:34.422626  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:34:34.444886  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:34:34.463194  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:34:34.485380  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:34:34.504994  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:34:34.523903  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:34:34.542693  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 06:34:34.560781  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 06:34:34.580039  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:34:34.598952  407330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:34:34.612103  407330 ssh_runner.go:195] Run: openssl version
	I1210 06:34:34.618607  407330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.626715  407330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 06:34:34.634462  407330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.638500  407330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.638572  407330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.680023  407330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:34:34.687891  407330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.695733  407330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:34:34.704338  407330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.708573  407330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.708632  407330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.750214  407330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:34:34.758402  407330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.766563  407330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 06:34:34.774837  407330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.779114  407330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.779177  407330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.821136  407330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
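The hash/symlink pairs above follow the OpenSSL trust-store convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and the system expects /etc/ssl/certs/<hash>.0 to be a symlink to the PEM file, which is what the `sudo test -L` calls verify after the earlier `ln -fs`. A sketch of the same check (not minikube's code):

    // Compute a cert's subject-name hash and check the trust-store symlink.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        certPath := "/usr/share/ca-certificates/minikubeCA.pem" // from the log above
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out)) // "b5213941" for minikubeCA.pem above
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Equivalent of the log's `sudo test -L /etc/ssl/certs/<hash>.0`.
        if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
            fmt.Println("trust-store symlink present:", link)
        }
    }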
	I1210 06:34:34.829270  407330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:34:34.833529  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:34:34.876277  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:34:34.917707  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:34:34.959457  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:34:35.001865  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:34:35.044914  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
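The six `-checkend 86400` runs above test each control-plane certificate for expiry within the next 86400 seconds (24 hours); openssl exits non-zero if the cert would expire in that window, which is what triggers regeneration. The same test in pure Go (a sketch, not minikube's implementation):

    // Fail if a PEM certificate expires within the given window,
    // mirroring `openssl x509 -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path) // e.g. /var/lib/minikube/certs/etcd/peer.crt
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin(os.Args[1], 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        if soon {
            os.Exit(1) // mirrors openssl's non-zero exit for -checkend
        }
    }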
	I1210 06:34:35.086921  407330 kubeadm.go:401] StartCluster: {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:34:35.087016  407330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:34:35.087089  407330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:34:35.117459  407330 cri.go:89] found id: ""
	I1210 06:34:35.117522  407330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:34:35.127607  407330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:34:35.127629  407330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:34:35.127685  407330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:34:35.136902  407330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.137526  407330 kubeconfig.go:125] found "functional-253997" server: "https://192.168.49.2:8441"
	I1210 06:34:35.138779  407330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:34:35.148051  407330 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 06:19:55.285285887 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:34:33.703709051 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
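The drift check above relies on diff(1)'s exit-code contract: 0 when the files match, 1 when they differ, 2 or more on error, so exit code 1 is the "reconfigure cluster" signal (here because the test swapped the admission plugins to NamespaceAutoProvision). A sketch of the same decision, assuming nothing beyond the diff semantics shown in the log:

    // Detect kubeadm config drift from diff(1)'s exit code.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func configDrifted(current, proposed string) (bool, error) {
        err := exec.Command("diff", "-u", current, proposed).Run()
        if err == nil {
            return false, nil // exit 0: identical
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, nil // exit 1: files differ, drift detected
        }
        return false, err // exit >=2 or exec failure
    }

    func main() {
        drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
    }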
	I1210 06:34:35.148070  407330 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:34:35.148082  407330 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 06:34:35.148140  407330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:34:35.178671  407330 cri.go:89] found id: ""
	I1210 06:34:35.178737  407330 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:34:35.196838  407330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:34:35.205412  407330 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 10 06:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 10 06:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 06:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 10 06:24 /etc/kubernetes/scheduler.conf
	
	I1210 06:34:35.205484  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:34:35.213947  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:34:35.222529  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.222599  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:34:35.230587  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:34:35.239174  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.239260  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:34:35.247436  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:34:35.255726  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.255785  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
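In the pass above, each component kubeconfig is grepped for the expected server URL; grep exits 1 when the string is absent, so the file is treated as stale and removed (admin.conf passes the check, the other three do not) so that the kubeconfig phase below can regenerate them. A pure-Go sketch of the same logic:

    // Remove component kubeconfigs that no longer point at the control plane.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func removeIfStale(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if !strings.Contains(string(data), endpoint) { // grep exit 1 in the log
            fmt.Println("stale, removing:", path)
            return os.Remove(path)
        }
        return nil
    }

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8441"
        for _, f := range []string{
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := removeIfStale(f, endpoint); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }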
	I1210 06:34:35.264394  407330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:34:35.273245  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:35.319550  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.241705  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.453815  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.521107  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
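The five commands above replay the kubeadm init phases in dependency order against the same config file: certificates first, then the kubeconfigs that reference them, then kubelet startup, the control-plane static pods, and local etcd. A sketch of that sequence (minikube actually runs each phase via sudo with PATH pinned to /var/lib/minikube/binaries/v1.35.0-rc.1, omitted here for brevity):

    // Replay kubeadm init phases in the order the log shows.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                log.Fatalf("phase %v: %v", p, err)
            }
        }
    }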
	I1210 06:34:36.566051  407330 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:34:36.566126  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:37.067292  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:37.566512  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:38.066836  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:38.566899  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:39.066341  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:39.566346  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:40.066332  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:40.566372  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:41.066499  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:41.566268  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:42.066346  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:42.567303  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:43.066665  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:43.567003  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:44.067024  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:44.566335  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:45.066417  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:45.567077  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:46.066880  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:46.567080  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:47.067184  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:47.567178  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:48.066963  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:48.566974  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:49.067037  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:49.566287  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:50.066336  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:50.566364  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:51.067235  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:51.566986  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:52.067009  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:52.567206  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:53.067261  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:53.566344  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:54.066310  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:54.566298  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:55.066264  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:55.567074  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:56.066263  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:56.566336  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:57.066335  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:57.566328  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:58.067273  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:58.566628  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:59.066382  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:59.566689  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:00.067148  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:00.566514  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:01.067178  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:01.566354  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:02.066731  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:02.566399  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:03.066319  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:03.566548  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:04.067174  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:04.566325  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:05.066402  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:05.566911  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:06.066322  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:06.566332  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:07.066357  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:07.566349  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:08.066401  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:08.566901  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:09.066304  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:09.566288  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:10.067048  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:10.566583  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:11.066369  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:11.566359  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:12.066308  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:12.566316  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:13.067242  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:13.566381  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:14.066924  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:14.566356  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:15.066288  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:15.566336  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:16.066320  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:16.566227  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:17.066312  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:17.567213  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:18.067248  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:18.566316  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:19.066386  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:19.566330  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:20.066351  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:20.567009  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:21.066262  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:21.566459  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:22.067279  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:22.567207  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:23.066320  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:23.566322  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:24.066326  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:24.567019  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:25.066297  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:25.566495  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:26.066321  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:26.566348  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:27.066383  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:27.566446  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:28.066328  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:28.566352  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:29.066994  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:29.566974  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:30.067021  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:30.566389  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:31.066477  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:31.567070  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:32.067017  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:32.566317  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:33.066608  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:33.566260  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:34.066340  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:34.566882  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:35.066828  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:35.566890  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:36.066318  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
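The timestamps above show `pgrep -xnf kube-apiserver.*minikube.*` polled roughly every 500 ms from 06:34:36 to 06:35:36; after about a minute with no match, the wait falls through to the diagnostics gathering below. A sketch of such a loop (the one-minute deadline is inferred from the timestamps, not taken from minikube's source):

    // Poll for the apiserver process until it appears or a deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }

    func main() {
        fmt.Println("apiserver up:", waitForAPIServer(time.Minute))
    }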
	I1210 06:35:36.566330  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:36.566414  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:36.592227  407330 cri.go:89] found id: ""
	I1210 06:35:36.592241  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.592248  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:36.592253  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:36.592312  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:36.622028  407330 cri.go:89] found id: ""
	I1210 06:35:36.622043  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.622051  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:36.622056  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:36.622116  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:36.648208  407330 cri.go:89] found id: ""
	I1210 06:35:36.648226  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.648234  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:36.648240  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:36.648298  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:36.674377  407330 cri.go:89] found id: ""
	I1210 06:35:36.674397  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.674405  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:36.674410  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:36.674471  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:36.699772  407330 cri.go:89] found id: ""
	I1210 06:35:36.699787  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.699794  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:36.699801  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:36.699864  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:36.724815  407330 cri.go:89] found id: ""
	I1210 06:35:36.724830  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.724838  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:36.724843  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:36.724900  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:36.750775  407330 cri.go:89] found id: ""
	I1210 06:35:36.750791  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.750798  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:36.750806  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:36.750820  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:36.820446  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:36.820465  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:36.835955  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:36.835970  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:36.903411  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:36.895055   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.895887   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.897562   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.898101   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.899639   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:36.895055   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.895887   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.897562   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.898101   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.899639   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:36.903424  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:36.903435  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:36.979747  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:36.979768  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
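Each diagnostics pass like the one above probes crictl for every expected control-plane container by name; since all come back empty, it collects the kubelet journal, dmesg, `kubectl describe nodes` (which fails while the apiserver is down), the CRI-O journal, and container status, the last via a crictl-then-docker fallback. A sketch of the per-component probe, assuming only the crictl flags visible in the log:

    // Probe for each control-plane container; empty output means "not found".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // Errors are ignored in this sketch; the log treats empty
            // output and failure the same way ("0 containers").
            out, _ := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if strings.TrimSpace(string(out)) == "" {
                fmt.Printf("no container found matching %q\n", name)
            }
        }
    }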
	I1210 06:35:39.514581  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:39.524909  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:39.524970  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:39.550102  407330 cri.go:89] found id: ""
	I1210 06:35:39.550116  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.550124  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:39.550129  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:39.550187  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:39.576588  407330 cri.go:89] found id: ""
	I1210 06:35:39.576602  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.576619  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:39.576624  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:39.576690  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:39.603288  407330 cri.go:89] found id: ""
	I1210 06:35:39.603303  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.603310  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:39.603315  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:39.603373  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:39.632338  407330 cri.go:89] found id: ""
	I1210 06:35:39.632353  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.632360  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:39.632365  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:39.632420  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:39.657752  407330 cri.go:89] found id: ""
	I1210 06:35:39.657767  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.657773  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:39.657779  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:39.657844  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:39.683212  407330 cri.go:89] found id: ""
	I1210 06:35:39.683226  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.683234  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:39.683240  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:39.683300  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:39.708413  407330 cri.go:89] found id: ""
	I1210 06:35:39.708437  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.708445  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:39.708453  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:39.708464  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:39.775637  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:39.775659  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:39.791086  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:39.791102  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:39.857652  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:39.849379   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.849925   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.851713   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.852318   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.853965   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:39.849379   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.849925   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.851713   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.852318   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.853965   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:39.857663  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:39.857675  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:39.935547  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:39.935569  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:42.469375  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:42.480182  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:42.480240  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:42.506760  407330 cri.go:89] found id: ""
	I1210 06:35:42.506774  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.506781  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:42.506786  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:42.506843  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:42.536234  407330 cri.go:89] found id: ""
	I1210 06:35:42.536249  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.536256  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:42.536261  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:42.536329  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:42.566988  407330 cri.go:89] found id: ""
	I1210 06:35:42.567003  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.567010  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:42.567015  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:42.567076  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:42.592607  407330 cri.go:89] found id: ""
	I1210 06:35:42.592630  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.592638  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:42.592643  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:42.592709  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:42.617649  407330 cri.go:89] found id: ""
	I1210 06:35:42.617664  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.617671  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:42.617676  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:42.617734  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:42.643410  407330 cri.go:89] found id: ""
	I1210 06:35:42.643425  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.643432  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:42.643437  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:42.643503  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:42.669531  407330 cri.go:89] found id: ""
	I1210 06:35:42.669546  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.669553  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:42.669561  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:42.669571  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:42.735924  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:42.735944  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:42.751205  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:42.751229  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:42.816158  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:42.807932   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.808831   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810433   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810980   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.812516   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:42.807932   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.808831   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810433   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810980   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.812516   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:42.816169  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:42.816179  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:42.893021  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:42.893042  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:45.426224  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:45.438079  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:45.438148  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:45.472267  407330 cri.go:89] found id: ""
	I1210 06:35:45.472291  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.472299  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:45.472306  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:45.472384  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:45.502901  407330 cri.go:89] found id: ""
	I1210 06:35:45.502931  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.502939  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:45.502945  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:45.503008  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:45.529442  407330 cri.go:89] found id: ""
	I1210 06:35:45.529458  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.529465  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:45.529470  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:45.529534  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:45.555125  407330 cri.go:89] found id: ""
	I1210 06:35:45.555139  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.555159  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:45.555165  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:45.555243  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:45.580961  407330 cri.go:89] found id: ""
	I1210 06:35:45.580976  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.580994  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:45.580999  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:45.581057  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:45.610965  407330 cri.go:89] found id: ""
	I1210 06:35:45.610980  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.610987  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:45.610993  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:45.611059  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:45.637091  407330 cri.go:89] found id: ""
	I1210 06:35:45.637105  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.637120  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:45.637128  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:45.637137  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:45.715413  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:45.715435  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:45.749154  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:45.749171  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:45.815517  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:45.815543  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:45.831429  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:45.831446  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:45.906374  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:45.898629   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.899395   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.900929   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.901506   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.902971   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:45.898629   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.899395   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.900929   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.901506   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.902971   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:48.406578  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:48.421255  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:48.421324  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:48.447131  407330 cri.go:89] found id: ""
	I1210 06:35:48.447146  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.447153  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:48.447159  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:48.447220  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:48.473099  407330 cri.go:89] found id: ""
	I1210 06:35:48.473122  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.473129  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:48.473134  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:48.473222  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:48.498597  407330 cri.go:89] found id: ""
	I1210 06:35:48.498612  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.498619  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:48.498624  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:48.498681  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:48.523362  407330 cri.go:89] found id: ""
	I1210 06:35:48.523377  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.523384  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:48.523389  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:48.523453  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:48.551807  407330 cri.go:89] found id: ""
	I1210 06:35:48.551821  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.551835  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:48.551840  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:48.551900  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:48.581473  407330 cri.go:89] found id: ""
	I1210 06:35:48.581487  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.581502  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:48.581509  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:48.581565  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:48.607499  407330 cri.go:89] found id: ""
	I1210 06:35:48.607514  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.607521  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:48.607529  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:48.607539  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:48.673753  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:48.673774  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:48.688837  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:48.688853  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:48.751707  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:48.744029   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.744581   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746111   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746590   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.748085   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:48.744029   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.744581   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746111   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746590   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.748085   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:48.751717  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:48.751727  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:48.828663  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:48.828686  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:51.363003  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:51.376217  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:51.376312  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:51.407718  407330 cri.go:89] found id: ""
	I1210 06:35:51.407732  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.407755  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:51.407762  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:51.407874  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:51.444235  407330 cri.go:89] found id: ""
	I1210 06:35:51.444269  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.444286  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:51.444295  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:51.444379  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:51.474869  407330 cri.go:89] found id: ""
	I1210 06:35:51.474883  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.474890  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:51.474895  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:51.474953  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:51.504739  407330 cri.go:89] found id: ""
	I1210 06:35:51.504764  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.504772  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:51.504777  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:51.504846  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:51.532353  407330 cri.go:89] found id: ""
	I1210 06:35:51.532368  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.532375  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:51.532380  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:51.532455  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:51.557565  407330 cri.go:89] found id: ""
	I1210 06:35:51.557579  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.557586  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:51.557591  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:51.557661  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:51.583285  407330 cri.go:89] found id: ""
	I1210 06:35:51.583300  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.583307  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:51.583315  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:51.583325  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:51.613387  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:51.613404  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:51.680028  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:51.680049  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:51.695935  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:51.695952  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:51.759280  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:51.750909   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.751756   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753321   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753933   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.755583   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:51.750909   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.751756   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753321   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753933   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.755583   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:51.759290  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:51.759301  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:54.338519  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:54.348725  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:54.348780  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:54.383598  407330 cri.go:89] found id: ""
	I1210 06:35:54.383626  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.383634  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:54.383639  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:54.383707  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:54.410152  407330 cri.go:89] found id: ""
	I1210 06:35:54.410180  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.410187  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:54.410192  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:54.410264  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:54.438326  407330 cri.go:89] found id: ""
	I1210 06:35:54.438352  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.438360  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:54.438365  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:54.438441  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:54.465850  407330 cri.go:89] found id: ""
	I1210 06:35:54.465864  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.465871  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:54.465876  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:54.465931  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:54.491709  407330 cri.go:89] found id: ""
	I1210 06:35:54.491722  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.491729  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:54.491734  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:54.491790  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:54.523425  407330 cri.go:89] found id: ""
	I1210 06:35:54.523440  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.523447  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:54.523452  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:54.523548  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:54.550380  407330 cri.go:89] found id: ""
	I1210 06:35:54.550394  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.550411  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:54.550438  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:54.550449  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:54.582306  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:54.582324  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:54.647908  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:54.647927  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:54.663750  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:54.663772  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:54.730309  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:54.719958   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.720734   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.722315   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.724777   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.725551   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:54.719958   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.720734   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.722315   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.724777   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.725551   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:54.730320  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:54.730331  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:57.308665  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:57.320319  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:57.320392  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:57.345562  407330 cri.go:89] found id: ""
	I1210 06:35:57.345577  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.345584  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:57.345589  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:57.345647  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:57.371859  407330 cri.go:89] found id: ""
	I1210 06:35:57.371874  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.371897  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:57.371903  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:57.371970  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:57.406362  407330 cri.go:89] found id: ""
	I1210 06:35:57.406377  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.406384  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:57.406389  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:57.406463  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:57.436087  407330 cri.go:89] found id: ""
	I1210 06:35:57.436103  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.436110  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:57.436116  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:57.436187  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:57.465764  407330 cri.go:89] found id: ""
	I1210 06:35:57.465779  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.465786  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:57.465791  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:57.465867  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:57.494039  407330 cri.go:89] found id: ""
	I1210 06:35:57.494065  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.494073  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:57.494078  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:57.494145  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:57.520097  407330 cri.go:89] found id: ""
	I1210 06:35:57.520123  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.520131  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:57.520140  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:57.520151  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:57.586496  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:57.586517  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:57.602111  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:57.602128  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:57.668344  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:57.660356   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.660757   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.662627   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.663077   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.664745   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:57.660356   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.660757   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.662627   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.663077   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.664745   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:57.668356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:57.668367  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:57.746160  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:57.746183  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
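The container-status command above is deliberately defensive: `which crictl || echo crictl` resolves an absolute crictl path when one exists (falling back to the bare name otherwise), and the trailing `|| sudo docker ps -a` switches to Docker if the CRI listing fails entirely. A standalone equivalent, as a sketch rather than minikube's exact code path:

	# prefer an absolute crictl path when available
	CRICTL="$(which crictl || echo crictl)"
	# list all CRI containers; fall back to Docker if crictl is absent or errors out
	sudo "$CRICTL" ps -a || sudo docker ps -a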
	I1210 06:36:00.275712  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:00.321874  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:00.321955  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:00.384327  407330 cri.go:89] found id: ""
	I1210 06:36:00.384343  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.384351  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:00.384357  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:00.384451  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:00.459817  407330 cri.go:89] found id: ""
	I1210 06:36:00.459834  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.459842  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:00.459848  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:00.459916  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:00.497674  407330 cri.go:89] found id: ""
	I1210 06:36:00.497690  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.497698  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:00.497704  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:00.497774  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:00.541499  407330 cri.go:89] found id: ""
	I1210 06:36:00.541516  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.541525  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:00.541531  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:00.541613  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:00.581412  407330 cri.go:89] found id: ""
	I1210 06:36:00.581436  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.581463  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:00.581468  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:00.581541  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:00.610779  407330 cri.go:89] found id: ""
	I1210 06:36:00.610795  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.610802  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:00.610807  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:00.610870  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:00.642543  407330 cri.go:89] found id: ""
	I1210 06:36:00.642559  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.642567  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:00.642575  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:00.642586  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:00.710346  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:00.710367  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:00.725875  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:00.725894  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:00.793058  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:00.784782   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.785552   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787181   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787496   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.789654   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:00.784782   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.785552   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787181   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787496   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.789654   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:00.793071  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:00.793084  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:00.875916  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:00.875944  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:03.406417  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:03.419044  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:03.419120  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:03.447628  407330 cri.go:89] found id: ""
	I1210 06:36:03.447658  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.447666  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:03.447671  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:03.447737  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:03.474253  407330 cri.go:89] found id: ""
	I1210 06:36:03.474266  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.474274  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:03.474279  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:03.474336  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:03.500678  407330 cri.go:89] found id: ""
	I1210 06:36:03.500694  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.500701  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:03.500707  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:03.500768  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:03.528282  407330 cri.go:89] found id: ""
	I1210 06:36:03.528298  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.528306  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:03.528311  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:03.528373  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:03.556656  407330 cri.go:89] found id: ""
	I1210 06:36:03.556670  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.556678  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:03.556683  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:03.556743  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:03.583735  407330 cri.go:89] found id: ""
	I1210 06:36:03.583750  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.583758  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:03.583763  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:03.583819  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:03.609076  407330 cri.go:89] found id: ""
	I1210 06:36:03.609090  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.609097  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:03.609105  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:03.609115  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:03.686817  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:03.686837  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:03.716372  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:03.716389  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:03.784121  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:03.784140  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:03.799951  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:03.799970  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:03.868350  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:03.860456   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.860967   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.862601   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.863024   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.864532   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:03.860456   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.860967   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.862601   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.863024   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.864532   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
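Each failed `describe nodes` carries the same signature: five memcache.go:265 retries from kubectl's discovery client, all dialing the IPv6 loopback ([::1]:8441) and all rejected with ECONNREFUSED, followed by the one-line summary. A hedged wait loop for the port coming back (this assumes anonymous access to /healthz, the Kubernetes default; the 60-second timeout is arbitrary):

	# wait up to 60s for the apiserver to start answering on 8441
	for i in $(seq 1 60); do
	  if curl -ksf --max-time 2 https://localhost:8441/healthz >/dev/null; then
	    echo "apiserver is answering"
	    break
	  fi
	  sleep 1
	done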
	I1210 06:36:06.369008  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:06.379783  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:06.379844  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:06.413424  407330 cri.go:89] found id: ""
	I1210 06:36:06.413438  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.413452  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:06.413457  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:06.413518  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:06.455432  407330 cri.go:89] found id: ""
	I1210 06:36:06.455446  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.455453  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:06.455458  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:06.455518  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:06.484987  407330 cri.go:89] found id: ""
	I1210 06:36:06.485002  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.485011  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:06.485016  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:06.485079  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:06.510864  407330 cri.go:89] found id: ""
	I1210 06:36:06.510879  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.510887  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:06.510892  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:06.510955  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:06.536841  407330 cri.go:89] found id: ""
	I1210 06:36:06.536856  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.536863  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:06.536868  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:06.536928  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:06.563896  407330 cri.go:89] found id: ""
	I1210 06:36:06.563911  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.563918  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:06.563923  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:06.563982  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:06.588959  407330 cri.go:89] found id: ""
	I1210 06:36:06.588973  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.588981  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:06.588988  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:06.588998  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:06.665721  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:06.665743  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:06.694509  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:06.694527  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:06.761392  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:06.761412  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:06.776431  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:06.776448  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:06.839723  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:06.830973   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.831471   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.833376   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.834107   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.835971   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:06.830973   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.831471   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.833376   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.834107   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.835971   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
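Note the cadence of the `pgrep` probes (06:35:51, 06:35:54, 06:35:57, 06:36:00, ...): minikube re-checks roughly every three seconds for an apiserver process whose command line matches the profile, and repeats the log-gathering round only while the process is still absent. An equivalent wait loop, as a sketch and not minikube's implementation:

	# poll every 3s for a kube-apiserver process, as the retries above do
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done
	echo "kube-apiserver process detected"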
	I1210 06:36:09.340200  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:09.350423  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:09.350492  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:09.377180  407330 cri.go:89] found id: ""
	I1210 06:36:09.377216  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.377224  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:09.377229  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:09.377296  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:09.408780  407330 cri.go:89] found id: ""
	I1210 06:36:09.408794  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.408810  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:09.408817  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:09.408891  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:09.439014  407330 cri.go:89] found id: ""
	I1210 06:36:09.439028  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.439046  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:09.439051  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:09.439123  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:09.465550  407330 cri.go:89] found id: ""
	I1210 06:36:09.465570  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.465577  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:09.465582  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:09.465640  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:09.495077  407330 cri.go:89] found id: ""
	I1210 06:36:09.495092  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.495099  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:09.495104  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:09.495160  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:09.524259  407330 cri.go:89] found id: ""
	I1210 06:36:09.524283  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.524291  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:09.524296  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:09.524365  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:09.552397  407330 cri.go:89] found id: ""
	I1210 06:36:09.552411  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.552428  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:09.552435  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:09.552445  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:09.617989  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:09.618009  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:09.633375  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:09.633391  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:09.703345  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:09.695844   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.696518   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.697933   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.698390   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.699860   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:09.695844   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.696518   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.697933   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.698390   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.699860   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:09.703356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:09.703368  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:09.780941  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:09.780963  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:12.311981  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:12.322588  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:12.322649  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:12.348408  407330 cri.go:89] found id: ""
	I1210 06:36:12.348423  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.348430  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:12.348436  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:12.348494  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:12.381450  407330 cri.go:89] found id: ""
	I1210 06:36:12.381465  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.381492  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:12.381497  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:12.381565  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:12.421286  407330 cri.go:89] found id: ""
	I1210 06:36:12.421301  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.421309  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:12.421314  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:12.421381  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:12.453573  407330 cri.go:89] found id: ""
	I1210 06:36:12.453598  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.453605  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:12.453611  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:12.453677  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:12.480195  407330 cri.go:89] found id: ""
	I1210 06:36:12.480210  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.480218  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:12.480225  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:12.480290  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:12.505648  407330 cri.go:89] found id: ""
	I1210 06:36:12.505662  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.505669  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:12.505674  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:12.505732  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:12.532083  407330 cri.go:89] found id: ""
	I1210 06:36:12.532097  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.532104  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:12.532112  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:12.532125  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:12.598623  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:12.598646  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:12.614317  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:12.614336  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:12.686805  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:12.678148   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.678809   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.680931   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.681479   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.683127   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:12.678148   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.678809   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.680931   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.681479   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.683127   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:12.686817  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:12.686828  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:12.768698  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:12.768719  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:15.302091  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:15.312582  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:15.312644  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:15.338874  407330 cri.go:89] found id: ""
	I1210 06:36:15.338889  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.338897  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:15.338902  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:15.338962  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:15.365600  407330 cri.go:89] found id: ""
	I1210 06:36:15.365614  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.365621  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:15.365627  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:15.365687  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:15.405324  407330 cri.go:89] found id: ""
	I1210 06:36:15.405339  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.405346  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:15.405352  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:15.405411  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:15.438276  407330 cri.go:89] found id: ""
	I1210 06:36:15.438290  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.438298  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:15.438304  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:15.438362  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:15.465120  407330 cri.go:89] found id: ""
	I1210 06:36:15.465135  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.465142  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:15.465147  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:15.465226  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:15.490880  407330 cri.go:89] found id: ""
	I1210 06:36:15.490894  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.490901  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:15.490906  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:15.490968  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:15.517171  407330 cri.go:89] found id: ""
	I1210 06:36:15.517208  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.517215  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:15.517224  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:15.517235  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:15.580940  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:15.573253   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.573879   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575387   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575909   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.577385   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:15.580950  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:15.580962  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:15.657832  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:15.657853  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:15.690721  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:15.690738  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:15.755970  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:15.755993  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
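Each poll in this stretch follows the same pattern: look for a kube-apiserver process, list CRI containers for every control-plane component, and, finding none, fall back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying about three seconds later. A minimal shell sketch of that poll, runnable inside the node (e.g. via "minikube ssh"); the 60-iteration bound and the final log dump are choices made here for illustration, while the individual commands are taken verbatim from the log:

    #!/bin/bash
    # Poll for a kube-apiserver container the way the surrounding log does.
    for _ in $(seq 1 60); do
      # Newest process whose full command line matches the apiserver pattern.
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process is up"
      # Any kube-apiserver container known to the CRI, in any state?
      ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
      if [ -n "$ids" ]; then
        echo "found kube-apiserver container(s): $ids"
        exit 0
      fi
      sleep 3
    done
    # Nothing appeared: dump the same logs minikube gathers on each failed poll.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    exit 1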
	I1210 06:36:18.272507  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:18.282762  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:18.282822  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:18.312952  407330 cri.go:89] found id: ""
	I1210 06:36:18.312966  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.312980  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:18.312986  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:18.313048  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:18.340174  407330 cri.go:89] found id: ""
	I1210 06:36:18.340189  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.340196  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:18.340201  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:18.340260  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:18.365096  407330 cri.go:89] found id: ""
	I1210 06:36:18.365111  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.365118  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:18.365122  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:18.365178  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:18.408189  407330 cri.go:89] found id: ""
	I1210 06:36:18.408203  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.408210  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:18.408215  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:18.408271  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:18.439330  407330 cri.go:89] found id: ""
	I1210 06:36:18.439344  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.439351  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:18.439357  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:18.439413  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:18.471472  407330 cri.go:89] found id: ""
	I1210 06:36:18.471486  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.471493  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:18.471498  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:18.471561  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:18.499541  407330 cri.go:89] found id: ""
	I1210 06:36:18.499555  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.499562  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:18.499569  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:18.499579  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:18.566266  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:18.566288  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:18.581335  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:18.581351  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:18.649633  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:18.642053   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.642777   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644268   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644684   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.646194   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:18.649644  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:18.649657  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:18.727427  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:18.727447  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:21.256173  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:21.266342  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:21.266401  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:21.291198  407330 cri.go:89] found id: ""
	I1210 06:36:21.291212  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.291219  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:21.291224  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:21.291285  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:21.317809  407330 cri.go:89] found id: ""
	I1210 06:36:21.317823  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.317831  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:21.317836  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:21.317893  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:21.349023  407330 cri.go:89] found id: ""
	I1210 06:36:21.349038  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.349046  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:21.349051  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:21.349112  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:21.377021  407330 cri.go:89] found id: ""
	I1210 06:36:21.377036  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.377043  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:21.377049  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:21.377128  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:21.414828  407330 cri.go:89] found id: ""
	I1210 06:36:21.414843  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.414853  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:21.414858  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:21.414924  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:21.448750  407330 cri.go:89] found id: ""
	I1210 06:36:21.448765  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.448772  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:21.448778  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:21.448836  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:21.475060  407330 cri.go:89] found id: ""
	I1210 06:36:21.475082  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.475089  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:21.475097  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:21.475109  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:21.544320  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:21.544350  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:21.559538  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:21.559554  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:21.623730  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:21.615598   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.616210   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.617863   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.618444   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.620054   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:21.623741  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:21.623754  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:21.703706  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:21.703726  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:24.232360  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:24.242917  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:24.242977  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:24.272666  407330 cri.go:89] found id: ""
	I1210 06:36:24.272681  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.272688  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:24.272693  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:24.272762  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:24.298359  407330 cri.go:89] found id: ""
	I1210 06:36:24.298374  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.298381  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:24.298386  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:24.298448  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:24.324096  407330 cri.go:89] found id: ""
	I1210 06:36:24.324110  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.324117  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:24.324122  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:24.324180  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:24.352195  407330 cri.go:89] found id: ""
	I1210 06:36:24.352210  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.352217  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:24.352223  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:24.352281  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:24.392094  407330 cri.go:89] found id: ""
	I1210 06:36:24.392109  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.392116  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:24.392121  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:24.392180  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:24.433688  407330 cri.go:89] found id: ""
	I1210 06:36:24.433702  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.433716  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:24.433721  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:24.433780  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:24.461088  407330 cri.go:89] found id: ""
	I1210 06:36:24.461103  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.461110  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:24.461118  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:24.461140  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:24.491187  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:24.491203  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:24.557420  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:24.557442  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:24.572719  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:24.572736  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:24.638182  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:24.629865   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.630603   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632247   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632788   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.634516   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:24.638192  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:24.638204  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:27.215263  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:27.225429  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:27.225490  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:27.250600  407330 cri.go:89] found id: ""
	I1210 06:36:27.250623  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.250630  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:27.250636  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:27.250696  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:27.275244  407330 cri.go:89] found id: ""
	I1210 06:36:27.275258  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.275266  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:27.275271  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:27.275337  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:27.303675  407330 cri.go:89] found id: ""
	I1210 06:36:27.303699  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.303707  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:27.303713  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:27.303779  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:27.329179  407330 cri.go:89] found id: ""
	I1210 06:36:27.329211  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.329219  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:27.329225  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:27.329294  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:27.354254  407330 cri.go:89] found id: ""
	I1210 06:36:27.354269  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.354276  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:27.354282  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:27.354340  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:27.386524  407330 cri.go:89] found id: ""
	I1210 06:36:27.386539  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.386546  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:27.386552  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:27.386608  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:27.419941  407330 cri.go:89] found id: ""
	I1210 06:36:27.419964  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.419972  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:27.419980  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:27.419990  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:27.489413  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:27.489436  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:27.504358  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:27.504375  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:27.572076  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:27.564125   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.564752   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.566500   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.567122   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.568559   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:27.572087  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:27.572097  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:27.652684  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:27.652704  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:30.186931  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:30.198655  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:30.198720  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:30.226217  407330 cri.go:89] found id: ""
	I1210 06:36:30.226239  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.226247  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:30.226252  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:30.226319  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:30.254245  407330 cri.go:89] found id: ""
	I1210 06:36:30.254261  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.254268  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:30.254273  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:30.254331  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:30.282139  407330 cri.go:89] found id: ""
	I1210 06:36:30.282154  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.282162  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:30.282167  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:30.282227  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:30.308968  407330 cri.go:89] found id: ""
	I1210 06:36:30.308992  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.308999  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:30.309005  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:30.309076  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:30.337543  407330 cri.go:89] found id: ""
	I1210 06:36:30.337558  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.337565  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:30.337570  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:30.337630  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:30.366448  407330 cri.go:89] found id: ""
	I1210 06:36:30.366463  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.366477  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:30.366483  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:30.366542  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:30.404619  407330 cri.go:89] found id: ""
	I1210 06:36:30.404641  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.404649  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:30.404656  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:30.404667  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:30.484453  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:30.484481  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:30.499101  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:30.499118  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:30.561567  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:30.553438   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.554141   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.555797   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.556329   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.557890   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:30.561578  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:30.561589  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:30.638801  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:30.638822  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:33.169370  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:33.179597  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:33.179662  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:33.204216  407330 cri.go:89] found id: ""
	I1210 06:36:33.204230  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.204246  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:33.204252  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:33.204309  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:33.229498  407330 cri.go:89] found id: ""
	I1210 06:36:33.229512  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.229519  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:33.229524  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:33.229580  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:33.255490  407330 cri.go:89] found id: ""
	I1210 06:36:33.255505  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.255521  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:33.255527  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:33.255593  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:33.283936  407330 cri.go:89] found id: ""
	I1210 06:36:33.283960  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.283968  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:33.283974  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:33.284052  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:33.308959  407330 cri.go:89] found id: ""
	I1210 06:36:33.308974  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.308984  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:33.308990  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:33.309058  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:33.335830  407330 cri.go:89] found id: ""
	I1210 06:36:33.335853  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.335860  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:33.335866  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:33.335936  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:33.362154  407330 cri.go:89] found id: ""
	I1210 06:36:33.362179  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.362187  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:33.362196  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:33.362208  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:33.410395  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:33.410413  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:33.480770  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:33.480789  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:33.496511  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:33.496527  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:33.563939  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:33.556146   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.556663   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.558166   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.558668   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.560192   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:33.563950  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:33.563961  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:36.141828  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:36.152734  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:36.152795  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:36.178688  407330 cri.go:89] found id: ""
	I1210 06:36:36.178703  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.178710  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:36.178716  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:36.178776  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:36.205685  407330 cri.go:89] found id: ""
	I1210 06:36:36.205700  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.205707  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:36.205712  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:36.205771  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:36.231383  407330 cri.go:89] found id: ""
	I1210 06:36:36.231398  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.231411  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:36.231418  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:36.231480  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:36.257291  407330 cri.go:89] found id: ""
	I1210 06:36:36.257316  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.257324  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:36.257329  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:36.257400  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:36.287683  407330 cri.go:89] found id: ""
	I1210 06:36:36.287697  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.287704  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:36.287709  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:36.287767  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:36.313785  407330 cri.go:89] found id: ""
	I1210 06:36:36.313799  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.313807  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:36.313812  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:36.313871  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:36.339325  407330 cri.go:89] found id: ""
	I1210 06:36:36.339339  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.339347  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:36.339356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:36.339369  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:36.421249  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:36.421268  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:36.458225  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:36.458243  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:36.528365  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:36.528384  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:36.544683  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:36.544705  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:36.611624  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:36.602655   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.603473   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.605101   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.605888   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.607572   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:39.111891  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:39.122952  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:39.123016  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:39.151788  407330 cri.go:89] found id: ""
	I1210 06:36:39.151817  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.151825  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:39.151831  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:39.151902  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:39.176656  407330 cri.go:89] found id: ""
	I1210 06:36:39.176679  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.176686  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:39.176691  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:39.176759  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:39.203206  407330 cri.go:89] found id: ""
	I1210 06:36:39.203220  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.203227  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:39.203233  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:39.203289  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:39.228848  407330 cri.go:89] found id: ""
	I1210 06:36:39.228862  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.228869  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:39.228875  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:39.228933  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:39.258475  407330 cri.go:89] found id: ""
	I1210 06:36:39.258512  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.258519  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:39.258524  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:39.258589  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:39.283240  407330 cri.go:89] found id: ""
	I1210 06:36:39.283254  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.283261  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:39.283268  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:39.283328  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:39.312591  407330 cri.go:89] found id: ""
	I1210 06:36:39.312604  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.312611  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:39.312619  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:39.312629  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:39.380680  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:39.380703  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:39.397793  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:39.397809  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:39.469117  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:39.460579   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.461325   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.463132   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.463721   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.465358   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:39.469128  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:39.469139  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:39.546111  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:39.546131  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
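	The block above is one complete probe cycle: minikube shells out to crictl once per expected control-plane container name and treats an empty ID list as "no container found". A minimal Go sketch of that lookup pattern, with a hypothetical helper name (the real logic lives in minikube's cri.go), could look like this:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers mimics the probe seen in the log: it asks crictl for the
// IDs of all containers (any state) whose name matches the given filter.
// An empty result corresponds to the log's `found id: ""` / `0 containers` lines.
// Helper name and structure are illustrative, not minikube's actual code.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
	}
	var ids []string
	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	// Same container names the log cycles through, in the same order.
	names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, name := range names {
		ids, err := listCRIContainers(name)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
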
	I1210 06:36:42.076431  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:42.089265  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:42.089335  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:42.121496  407330 cri.go:89] found id: ""
	I1210 06:36:42.121512  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.121520  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:42.121526  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:42.121593  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:42.151688  407330 cri.go:89] found id: ""
	I1210 06:36:42.151704  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.151712  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:42.151717  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:42.151784  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:42.190925  407330 cri.go:89] found id: ""
	I1210 06:36:42.190942  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.190949  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:42.190955  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:42.191063  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:42.225827  407330 cri.go:89] found id: ""
	I1210 06:36:42.225849  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.225857  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:42.225863  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:42.225931  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:42.254453  407330 cri.go:89] found id: ""
	I1210 06:36:42.254467  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.254475  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:42.254480  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:42.254557  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:42.281514  407330 cri.go:89] found id: ""
	I1210 06:36:42.281536  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.281545  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:42.281550  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:42.281615  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:42.309082  407330 cri.go:89] found id: ""
	I1210 06:36:42.309097  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.309105  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:42.309115  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:42.309127  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:42.325376  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:42.325393  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:42.394971  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:42.386397   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.387396   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.389262   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.389603   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.390932   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:42.386397   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.387396   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.389262   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.389603   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.390932   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:42.394982  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:42.394993  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:42.480444  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:42.480463  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:42.513077  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:42.513094  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
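	Every "describe nodes" attempt in this section fails the same way: the version-pinned kubectl dials the apiserver at localhost:8441 and gets connection refused, which is consistent with the empty kube-apiserver container list just above it. A quick standalone way to check the same condition (a sketch, not anything minikube itself runs):

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverReachable reports whether anything is listening on the apiserver
// address that kubectl fails to reach in the log above. A failed dial here
// corresponds to kubectl's "connection refused" stderr.
func apiserverReachable(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println("apiserver up:", apiserverReachable("localhost:8441"))
}

	With no listener on 8441, kubectl retries the API group discovery several times, printing one memcache.go error per attempt before giving up, which is exactly the stderr block repeated throughout this section.
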
	I1210 06:36:45.082079  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:45.095928  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:45.096005  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:45.136147  407330 cri.go:89] found id: ""
	I1210 06:36:45.136165  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.136172  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:45.136178  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:45.136321  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:45.171561  407330 cri.go:89] found id: ""
	I1210 06:36:45.171577  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.171584  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:45.171590  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:45.171667  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:45.214225  407330 cri.go:89] found id: ""
	I1210 06:36:45.214243  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.214277  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:45.214282  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:45.214364  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:45.274027  407330 cri.go:89] found id: ""
	I1210 06:36:45.274044  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.274052  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:45.274058  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:45.274128  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:45.321536  407330 cri.go:89] found id: ""
	I1210 06:36:45.321553  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.321561  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:45.321567  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:45.321719  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:45.355270  407330 cri.go:89] found id: ""
	I1210 06:36:45.355285  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.355303  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:45.355310  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:45.355386  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:45.388777  407330 cri.go:89] found id: ""
	I1210 06:36:45.388801  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.388809  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:45.388817  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:45.388827  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:45.478699  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:45.478723  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:45.507903  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:45.507921  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:45.575844  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:45.575864  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:45.591861  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:45.591885  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:45.656312  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:45.648123   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.648663   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.650406   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.650984   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.652724   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:45.648123   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.648663   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.650406   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.650984   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.652724   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:48.156556  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:48.166976  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:48.167036  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:48.192782  407330 cri.go:89] found id: ""
	I1210 06:36:48.192807  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.192817  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:48.192824  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:48.192889  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:48.218586  407330 cri.go:89] found id: ""
	I1210 06:36:48.218600  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.218607  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:48.218623  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:48.218682  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:48.244757  407330 cri.go:89] found id: ""
	I1210 06:36:48.244771  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.244778  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:48.244783  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:48.244841  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:48.271671  407330 cri.go:89] found id: ""
	I1210 06:36:48.271685  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.271692  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:48.271697  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:48.271756  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:48.298466  407330 cri.go:89] found id: ""
	I1210 06:36:48.298480  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.298487  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:48.298493  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:48.298603  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:48.324794  407330 cri.go:89] found id: ""
	I1210 06:36:48.324808  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.324825  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:48.324830  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:48.324888  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:48.351036  407330 cri.go:89] found id: ""
	I1210 06:36:48.351051  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.351058  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:48.351065  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:48.351076  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:48.384287  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:48.384303  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:48.462134  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:48.462154  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:48.477421  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:48.477439  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:48.544257  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:48.535925   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.536728   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.538380   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.538978   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.540777   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:48.535925   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.536728   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.538380   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.538978   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.540777   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:48.544268  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:48.544279  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
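	The pgrep probes recur on a roughly three-second cadence (06:36:39, :42, :45, :48, ...), so this whole section is one polling loop waiting for a kube-apiserver process that never appears. A sketch of such a loop follows; the interval and timeout below are illustrative, not minikube's actual constants:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls for a kube-apiserver process the way the log
// does (one `sudo pgrep -xnf kube-apiserver.*minikube.*` every few seconds),
// giving up after the supplied timeout.
func waitForAPIServerProcess(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only if at least one process matches the pattern.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(3*time.Second, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
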
	I1210 06:36:51.122102  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:51.133691  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:51.133753  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:51.161091  407330 cri.go:89] found id: ""
	I1210 06:36:51.161106  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.161113  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:51.161119  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:51.161217  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:51.189850  407330 cri.go:89] found id: ""
	I1210 06:36:51.189865  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.189872  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:51.189877  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:51.189944  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:51.215676  407330 cri.go:89] found id: ""
	I1210 06:36:51.215691  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.215698  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:51.215703  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:51.215763  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:51.241638  407330 cri.go:89] found id: ""
	I1210 06:36:51.241653  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.241660  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:51.241666  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:51.241728  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:51.266737  407330 cri.go:89] found id: ""
	I1210 06:36:51.266752  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.266759  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:51.266764  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:51.266823  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:51.291896  407330 cri.go:89] found id: ""
	I1210 06:36:51.291911  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.291918  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:51.291923  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:51.291982  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:51.317807  407330 cri.go:89] found id: ""
	I1210 06:36:51.317823  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.317830  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:51.317838  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:51.317849  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:51.385260  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:51.385280  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:51.400443  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:51.400459  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:51.479768  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:51.472416   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.472835   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474317   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474686   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.476122   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:51.472416   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.472835   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474317   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474686   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.476122   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:51.479778  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:51.479789  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:51.556275  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:51.556295  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:54.087759  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:54.098770  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:54.098837  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:54.124003  407330 cri.go:89] found id: ""
	I1210 06:36:54.124017  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.124025  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:54.124030  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:54.124091  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:54.150185  407330 cri.go:89] found id: ""
	I1210 06:36:54.150200  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.150207  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:54.150213  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:54.150272  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:54.177121  407330 cri.go:89] found id: ""
	I1210 06:36:54.177135  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.177143  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:54.177148  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:54.177248  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:54.202926  407330 cri.go:89] found id: ""
	I1210 06:36:54.202941  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.202948  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:54.202953  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:54.203013  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:54.232186  407330 cri.go:89] found id: ""
	I1210 06:36:54.232200  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.232215  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:54.232221  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:54.232291  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:54.257570  407330 cri.go:89] found id: ""
	I1210 06:36:54.257584  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.257592  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:54.257597  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:54.257656  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:54.282060  407330 cri.go:89] found id: ""
	I1210 06:36:54.282074  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.282081  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:54.282088  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:54.282099  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:54.347704  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:54.347728  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:54.362634  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:54.362652  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:54.450702  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:54.442872   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.443270   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.444790   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.445114   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.446658   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:54.442872   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.443270   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.444790   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.445114   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.446658   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:54.450713  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:54.450723  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:54.528465  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:54.528487  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
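	After each failed probe the same four log sources are gathered (kubelet and CRI-O via journalctl, dmesg, and container status via crictl with a docker fallback); only their order varies between cycles. The command strings below are copied verbatim from the log lines; the surrounding harness is a hypothetical sketch:

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs the same diagnostic commands the log shows minikube issuing
// after each failed probe. Map iteration order is unspecified in Go, which
// incidentally matches the varying order seen across cycles above.
func gatherLogs() {
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
	}
}

func main() { gatherLogs() }
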
	I1210 06:36:57.060906  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:57.071228  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:57.071304  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:57.096846  407330 cri.go:89] found id: ""
	I1210 06:36:57.096859  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.096867  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:57.096872  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:57.096932  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:57.122828  407330 cri.go:89] found id: ""
	I1210 06:36:57.122845  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.122852  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:57.122858  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:57.122918  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:57.154708  407330 cri.go:89] found id: ""
	I1210 06:36:57.154723  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.154730  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:57.154736  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:57.154798  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:57.181521  407330 cri.go:89] found id: ""
	I1210 06:36:57.181543  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.181550  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:57.181556  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:57.181620  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:57.206722  407330 cri.go:89] found id: ""
	I1210 06:36:57.206736  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.206743  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:57.206749  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:57.206811  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:57.232129  407330 cri.go:89] found id: ""
	I1210 06:36:57.232143  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.232150  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:57.232155  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:57.232212  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:57.258044  407330 cri.go:89] found id: ""
	I1210 06:36:57.258057  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.258064  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:57.258071  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:57.258081  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:57.285624  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:57.285640  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:57.351757  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:57.351778  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:57.367138  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:57.367157  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:57.458560  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:57.450875   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.451498   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453015   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453535   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.454978   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:57.450875   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.451498   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453015   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453535   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.454978   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:57.458571  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:57.458582  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:00.035650  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:00.112450  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:00.112528  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:00.233350  407330 cri.go:89] found id: ""
	I1210 06:37:00.233368  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.233377  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:00.233383  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:00.233454  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:00.328120  407330 cri.go:89] found id: ""
	I1210 06:37:00.328136  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.328144  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:00.328150  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:00.328216  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:00.369964  407330 cri.go:89] found id: ""
	I1210 06:37:00.369981  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.369989  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:00.369995  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:00.370065  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:00.412610  407330 cri.go:89] found id: ""
	I1210 06:37:00.412628  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.412636  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:00.412642  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:00.412717  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:00.458193  407330 cri.go:89] found id: ""
	I1210 06:37:00.458212  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.458220  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:00.458225  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:00.458300  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:00.486825  407330 cri.go:89] found id: ""
	I1210 06:37:00.486840  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.486848  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:00.486853  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:00.486912  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:00.514588  407330 cri.go:89] found id: ""
	I1210 06:37:00.514604  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.514612  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:00.514631  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:00.514643  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:00.544788  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:00.544807  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:00.611036  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:00.611058  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:00.625887  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:00.625904  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:00.692620  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:00.684863   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.685709   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687299   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687611   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.689091   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:00.684863   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.685709   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687299   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687611   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.689091   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:00.692631  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:00.692642  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
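	The describe-nodes step itself invokes the Kubernetes-version-pinned kubectl that minikube stages under /var/lib/minikube/binaries, pointed at the cluster's kubeconfig. A sketch reproducing that exact invocation from the node, using the paths shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the "describe nodes" step in the log; with no
	// apiserver listening on localhost:8441 it exits with status 1 and the
	// "connection refused" stderr shown above.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}
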
	I1210 06:37:03.270067  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:03.280541  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:03.280604  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:03.306695  407330 cri.go:89] found id: ""
	I1210 06:37:03.306710  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.306718  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:03.306724  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:03.306788  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:03.335215  407330 cri.go:89] found id: ""
	I1210 06:37:03.335230  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.335237  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:03.335243  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:03.335302  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:03.366128  407330 cri.go:89] found id: ""
	I1210 06:37:03.366143  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.366150  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:03.366155  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:03.366214  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:03.407867  407330 cri.go:89] found id: ""
	I1210 06:37:03.407883  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.407891  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:03.407896  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:03.407957  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:03.439688  407330 cri.go:89] found id: ""
	I1210 06:37:03.439703  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.439710  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:03.439716  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:03.439776  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:03.470617  407330 cri.go:89] found id: ""
	I1210 06:37:03.470633  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.470640  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:03.470645  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:03.470708  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:03.495476  407330 cri.go:89] found id: ""
	I1210 06:37:03.495491  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.495498  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:03.495506  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:03.495516  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:03.562017  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:03.562037  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:03.577764  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:03.577782  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:03.644175  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:03.636471   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.637208   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.638686   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.639152   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.640632   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:03.636471   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.637208   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.638686   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.639152   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.640632   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:03.644187  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:03.644198  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:03.721903  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:03.721925  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:06.250929  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:06.261704  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:06.261767  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:06.290140  407330 cri.go:89] found id: ""
	I1210 06:37:06.290155  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.290163  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:06.290168  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:06.290226  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:06.315796  407330 cri.go:89] found id: ""
	I1210 06:37:06.315811  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.315819  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:06.315826  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:06.315884  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:06.340906  407330 cri.go:89] found id: ""
	I1210 06:37:06.340920  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.340927  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:06.340932  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:06.340996  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:06.367812  407330 cri.go:89] found id: ""
	I1210 06:37:06.367827  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.367835  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:06.367840  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:06.367899  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:06.401044  407330 cri.go:89] found id: ""
	I1210 06:37:06.401058  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.401065  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:06.401070  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:06.401166  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:06.438778  407330 cri.go:89] found id: ""
	I1210 06:37:06.438799  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.438806  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:06.438811  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:06.438892  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:06.466678  407330 cri.go:89] found id: ""
	I1210 06:37:06.466692  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.466700  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:06.466708  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:06.466718  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:06.544177  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:06.544200  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:06.573010  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:06.573027  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:06.640533  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:06.640553  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:06.656110  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:06.656128  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:06.723670  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:06.715242   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.715893   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.717714   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.718356   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.720062   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
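Each pass also sweeps CRI-O for the seven expected components by name filter, and every one (including etcd) returns no container IDs, which suggests the control-plane static pods were never created rather than created and crashed. The same sweep can be run inside the node, for example via minikube ssh, with the crictl flags copied from the Run lines above:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	    echo "== ${name} =="
	    sudo crictl ps -a --quiet --name="${name}"
	done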
	I1210 06:37:09.224405  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:09.234680  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:09.234741  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:09.260264  407330 cri.go:89] found id: ""
	I1210 06:37:09.260278  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.260285  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:09.260290  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:09.260348  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:09.285806  407330 cri.go:89] found id: ""
	I1210 06:37:09.285823  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.285830  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:09.285836  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:09.285899  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:09.315817  407330 cri.go:89] found id: ""
	I1210 06:37:09.315832  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.315840  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:09.315845  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:09.315901  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:09.346059  407330 cri.go:89] found id: ""
	I1210 06:37:09.346074  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.346081  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:09.346087  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:09.346144  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:09.381275  407330 cri.go:89] found id: ""
	I1210 06:37:09.381290  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.381297  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:09.381303  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:09.381366  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:09.414891  407330 cri.go:89] found id: ""
	I1210 06:37:09.414905  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.414912  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:09.414918  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:09.414979  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:09.443742  407330 cri.go:89] found id: ""
	I1210 06:37:09.443757  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.443763  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:09.443771  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:09.443781  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:09.510740  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:09.510762  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:09.526338  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:09.526355  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:09.590739  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:09.582122   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.582824   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.584640   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.585274   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.587041   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:09.590750  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:09.590762  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:09.668271  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:09.668292  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
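With no containers to inspect, each cycle falls back to the same log sources: the kubelet and CRI-O journals, dmesg, describe nodes, and the raw container list. They can be collected manually on the node with the commands from the Run lines (the log's container-status variant first resolves crictl with which and falls back to docker ps -a):

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a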
	I1210 06:37:12.200039  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:12.210520  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:12.210590  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:12.237060  407330 cri.go:89] found id: ""
	I1210 06:37:12.237075  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.237083  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:12.237088  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:12.237160  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:12.263263  407330 cri.go:89] found id: ""
	I1210 06:37:12.263277  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.263284  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:12.263290  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:12.263354  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:12.291756  407330 cri.go:89] found id: ""
	I1210 06:37:12.291772  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.291780  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:12.291785  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:12.291847  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:12.321162  407330 cri.go:89] found id: ""
	I1210 06:37:12.321177  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.321213  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:12.321218  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:12.321279  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:12.347025  407330 cri.go:89] found id: ""
	I1210 06:37:12.347039  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.347054  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:12.347060  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:12.347121  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:12.376035  407330 cri.go:89] found id: ""
	I1210 06:37:12.376050  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.376058  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:12.376064  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:12.376126  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:12.410703  407330 cri.go:89] found id: ""
	I1210 06:37:12.410717  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.410724  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:12.410733  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:12.410744  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:12.486662  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:12.486686  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:12.502236  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:12.502255  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:12.568662  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:12.560235   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.561423   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.562538   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.563321   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.564916   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:12.568672  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:12.568683  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:12.645878  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:12.645901  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:15.177927  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:15.191193  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:15.191288  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:15.219881  407330 cri.go:89] found id: ""
	I1210 06:37:15.219896  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.219904  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:15.219911  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:15.219971  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:15.247528  407330 cri.go:89] found id: ""
	I1210 06:37:15.247544  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.247551  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:15.247557  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:15.247620  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:15.274888  407330 cri.go:89] found id: ""
	I1210 06:37:15.274903  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.274911  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:15.274920  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:15.274979  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:15.300280  407330 cri.go:89] found id: ""
	I1210 06:37:15.300295  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.300302  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:15.300308  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:15.300369  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:15.325424  407330 cri.go:89] found id: ""
	I1210 06:37:15.325438  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.325445  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:15.325450  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:15.325512  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:15.359467  407330 cri.go:89] found id: ""
	I1210 06:37:15.359482  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.359490  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:15.359495  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:15.359551  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:15.399967  407330 cri.go:89] found id: ""
	I1210 06:37:15.399982  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.399990  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:15.399998  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:15.400019  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:15.477621  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:15.477643  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:15.493123  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:15.493140  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:15.564193  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:15.554932   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.555848   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.557673   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.558388   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.560046   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:15.564206  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:15.564216  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:15.640233  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:15.640254  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:18.174394  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:18.186025  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:18.186097  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:18.215781  407330 cri.go:89] found id: ""
	I1210 06:37:18.215795  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.215814  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:18.215819  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:18.215877  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:18.241012  407330 cri.go:89] found id: ""
	I1210 06:37:18.241033  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.241044  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:18.241054  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:18.241155  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:18.270058  407330 cri.go:89] found id: ""
	I1210 06:37:18.270072  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.270079  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:18.270090  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:18.270147  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:18.297554  407330 cri.go:89] found id: ""
	I1210 06:37:18.297576  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.297593  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:18.297603  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:18.297695  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:18.330116  407330 cri.go:89] found id: ""
	I1210 06:37:18.330130  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.330136  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:18.330142  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:18.330217  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:18.360475  407330 cri.go:89] found id: ""
	I1210 06:37:18.360489  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.360496  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:18.360502  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:18.360570  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:18.393014  407330 cri.go:89] found id: ""
	I1210 06:37:18.393028  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.393035  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:18.393043  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:18.393064  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:18.412466  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:18.412484  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:18.485431  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:18.477889   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.478765   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.479651   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.480367   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.481965   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:18.485441  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:18.485452  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:18.561043  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:18.561064  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:18.588628  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:18.588644  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:21.156119  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:21.166481  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:21.166541  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:21.191589  407330 cri.go:89] found id: ""
	I1210 06:37:21.191604  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.191611  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:21.191625  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:21.191689  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:21.217715  407330 cri.go:89] found id: ""
	I1210 06:37:21.217730  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.217738  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:21.217744  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:21.217811  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:21.246916  407330 cri.go:89] found id: ""
	I1210 06:37:21.246930  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.246945  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:21.246950  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:21.247005  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:21.271644  407330 cri.go:89] found id: ""
	I1210 06:37:21.271659  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.271666  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:21.271672  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:21.271739  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:21.299971  407330 cri.go:89] found id: ""
	I1210 06:37:21.299985  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.299993  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:21.299998  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:21.300057  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:21.325497  407330 cri.go:89] found id: ""
	I1210 06:37:21.325512  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.325519  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:21.325524  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:21.325583  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:21.351049  407330 cri.go:89] found id: ""
	I1210 06:37:21.351064  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.351071  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:21.351079  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:21.351095  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:21.421855  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:21.421874  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:21.437324  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:21.437341  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:21.499548  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:21.490639   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.491333   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.493043   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.493634   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.495290   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:21.499604  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:21.499615  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:21.576803  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:21.576824  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:24.110608  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:24.121006  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:24.121068  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:24.146461  407330 cri.go:89] found id: ""
	I1210 06:37:24.146476  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.146483  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:24.146488  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:24.146601  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:24.172866  407330 cri.go:89] found id: ""
	I1210 06:37:24.172882  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.172889  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:24.172894  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:24.172956  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:24.199448  407330 cri.go:89] found id: ""
	I1210 06:37:24.199463  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.199470  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:24.199475  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:24.199535  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:24.229234  407330 cri.go:89] found id: ""
	I1210 06:37:24.229250  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.229257  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:24.229263  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:24.229323  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:24.254311  407330 cri.go:89] found id: ""
	I1210 06:37:24.254326  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.254334  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:24.254339  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:24.254401  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:24.284029  407330 cri.go:89] found id: ""
	I1210 06:37:24.284044  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.284051  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:24.284056  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:24.284131  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:24.309694  407330 cri.go:89] found id: ""
	I1210 06:37:24.309708  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.309715  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:24.309724  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:24.309735  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:24.372553  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:24.363947   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.364695   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.366686   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.367278   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.368967   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:24.372563  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:24.372575  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:24.464562  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:24.464585  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:24.493762  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:24.493778  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:24.563092  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:24.563113  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
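The pgrep timestamps (06:37:03, :06, :09, :12, ...) show the apiserver check retrying on a roughly three-second cadence. While debugging, the same wait can be approximated from the host (a sketch, assuming the default profile; the pgrep pattern is copied from the Run lines):

	until minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	    sleep 3
	done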
	I1210 06:37:27.078938  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:27.089277  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:27.089338  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:27.114399  407330 cri.go:89] found id: ""
	I1210 06:37:27.114413  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.114421  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:27.114427  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:27.114491  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:27.144680  407330 cri.go:89] found id: ""
	I1210 06:37:27.144695  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.144702  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:27.144707  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:27.144765  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:27.168950  407330 cri.go:89] found id: ""
	I1210 06:37:27.168965  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.168972  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:27.168977  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:27.169034  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:27.196136  407330 cri.go:89] found id: ""
	I1210 06:37:27.196151  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.196159  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:27.196164  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:27.196221  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:27.225403  407330 cri.go:89] found id: ""
	I1210 06:37:27.225418  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.225426  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:27.225432  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:27.225492  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:27.252922  407330 cri.go:89] found id: ""
	I1210 06:37:27.252938  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.252945  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:27.252950  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:27.253009  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:27.278155  407330 cri.go:89] found id: ""
	I1210 06:37:27.278169  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.278177  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:27.278185  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:27.278197  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:27.309557  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:27.309573  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:27.385911  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:27.385939  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:27.404671  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:27.404689  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:27.482019  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:27.473831   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.474734   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.476086   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.476735   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.478362   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:27.482029  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:27.482040  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
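Since no kube-apiserver process or container ever appears, the quickest confirmation is that nothing is bound to the port at all (a sketch; ss and curl are assumed to be available in the node image):

	minikube ssh -- "sudo ss -ltn | grep 8441 || echo nothing listening on 8441"
	minikube ssh -- "curl -ks https://localhost:8441/healthz || echo healthz unreachable"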
	I1210 06:37:30.059859  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:30.073120  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:30.073221  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:30.104876  407330 cri.go:89] found id: ""
	I1210 06:37:30.104902  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.104910  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:30.104915  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:30.104992  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:30.133968  407330 cri.go:89] found id: ""
	I1210 06:37:30.133984  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.133999  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:30.134007  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:30.134079  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:30.162870  407330 cri.go:89] found id: ""
	I1210 06:37:30.162888  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.162895  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:30.162901  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:30.162965  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:30.190402  407330 cri.go:89] found id: ""
	I1210 06:37:30.190416  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.190424  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:30.190429  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:30.190488  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:30.219884  407330 cri.go:89] found id: ""
	I1210 06:37:30.219913  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.219920  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:30.219926  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:30.219999  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:30.246737  407330 cri.go:89] found id: ""
	I1210 06:37:30.246752  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.246760  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:30.246765  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:30.246825  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:30.273326  407330 cri.go:89] found id: ""
	I1210 06:37:30.273340  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.273348  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:30.273356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:30.273366  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:30.350646  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:30.350667  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:30.385499  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:30.385515  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:30.461766  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:30.461790  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:30.477421  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:30.477438  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:30.539694  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:30.532297   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.532864   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.534312   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.534817   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.536259   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:33.041379  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:33.052111  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:33.052178  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:33.080472  407330 cri.go:89] found id: ""
	I1210 06:37:33.080487  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.080494  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:33.080499  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:33.080556  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:33.107304  407330 cri.go:89] found id: ""
	I1210 06:37:33.107319  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.107326  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:33.107331  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:33.107389  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:33.133653  407330 cri.go:89] found id: ""
	I1210 06:37:33.133668  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.133675  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:33.133680  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:33.133740  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:33.159244  407330 cri.go:89] found id: ""
	I1210 06:37:33.159259  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.159266  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:33.159272  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:33.159328  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:33.185378  407330 cri.go:89] found id: ""
	I1210 06:37:33.185393  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.185402  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:33.185407  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:33.185466  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:33.210558  407330 cri.go:89] found id: ""
	I1210 06:37:33.210588  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.210609  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:33.210615  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:33.210672  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:33.235742  407330 cri.go:89] found id: ""
	I1210 06:37:33.235756  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.235773  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:33.235782  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:33.235796  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:33.303992  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:33.304010  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:33.321348  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:33.321367  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:33.396780  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:33.385824   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.386759   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.387788   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.388485   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.390532   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:33.396789  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:33.396800  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:33.483704  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:33.483727  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
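	The pgrep probes above recur on a roughly three-second cadence (06:37:30.059, 06:37:33.041, 06:37:36.014, ...): the tooling is waiting for an apiserver process to appear before declaring the start failed. A minimal bash sketch of that wait pattern (illustrative only; the 120-second budget is a placeholder, not minikube's real timeout):
	
	    deadline=$((SECONDS + 120))   # placeholder budget, not minikube's timeout
	    # Same probe the log records on each iteration:
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        if [ "$SECONDS" -ge "$deadline" ]; then
	            echo "kube-apiserver never appeared" >&2
	            break
	        fi
	        sleep 3   # matches the ~3s cadence in the log
	    done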
	I1210 06:37:36.014717  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:36.026269  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:36.026331  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:36.054956  407330 cri.go:89] found id: ""
	I1210 06:37:36.054982  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.054989  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:36.054995  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:36.055055  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:36.081454  407330 cri.go:89] found id: ""
	I1210 06:37:36.081470  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.081477  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:36.081483  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:36.081544  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:36.112094  407330 cri.go:89] found id: ""
	I1210 06:37:36.112108  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.112116  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:36.112121  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:36.112181  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:36.138426  407330 cri.go:89] found id: ""
	I1210 06:37:36.138441  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.138448  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:36.138453  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:36.138512  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:36.164608  407330 cri.go:89] found id: ""
	I1210 06:37:36.164623  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.164630  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:36.164637  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:36.164693  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:36.192038  407330 cri.go:89] found id: ""
	I1210 06:37:36.192052  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.192059  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:36.192064  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:36.192124  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:36.221044  407330 cri.go:89] found id: ""
	I1210 06:37:36.221058  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.221065  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:36.221073  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:36.221085  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:36.250907  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:36.250923  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:36.316733  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:36.316753  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:36.332493  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:36.332509  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:36.412829  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:36.401482   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.404020   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.404535   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.405958   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.407122   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:36.412843  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:36.412857  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:39.007236  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:39.020585  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:39.020658  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:39.046864  407330 cri.go:89] found id: ""
	I1210 06:37:39.046879  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.046886  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:39.046892  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:39.046954  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:39.076119  407330 cri.go:89] found id: ""
	I1210 06:37:39.076143  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.076152  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:39.076157  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:39.076226  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:39.102655  407330 cri.go:89] found id: ""
	I1210 06:37:39.102671  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.102678  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:39.102684  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:39.102746  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:39.128306  407330 cri.go:89] found id: ""
	I1210 06:37:39.128320  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.128327  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:39.128333  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:39.128407  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:39.156045  407330 cri.go:89] found id: ""
	I1210 06:37:39.156069  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.156076  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:39.156087  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:39.156156  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:39.183781  407330 cri.go:89] found id: ""
	I1210 06:37:39.183796  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.183804  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:39.183809  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:39.183867  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:39.209244  407330 cri.go:89] found id: ""
	I1210 06:37:39.209258  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.209266  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:39.209273  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:39.209294  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:39.274373  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:39.274392  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:39.289765  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:39.289782  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:39.353525  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:39.345986   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.346357   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.348003   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.348560   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.350004   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:39.353537  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:39.353548  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:39.432803  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:39.432822  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:41.965778  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:41.979117  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:41.979179  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:42.015640  407330 cri.go:89] found id: ""
	I1210 06:37:42.015658  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.015683  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:42.015689  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:42.015759  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:42.048532  407330 cri.go:89] found id: ""
	I1210 06:37:42.048546  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.048553  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:42.048559  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:42.048618  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:42.076982  407330 cri.go:89] found id: ""
	I1210 06:37:42.076998  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.077006  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:42.077012  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:42.077084  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:42.112254  407330 cri.go:89] found id: ""
	I1210 06:37:42.112295  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.112304  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:42.112312  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:42.112393  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:42.150624  407330 cri.go:89] found id: ""
	I1210 06:37:42.150640  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.150647  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:42.150653  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:42.150718  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:42.180813  407330 cri.go:89] found id: ""
	I1210 06:37:42.180845  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.180854  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:42.180860  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:42.180927  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:42.212103  407330 cri.go:89] found id: ""
	I1210 06:37:42.212120  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.212129  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:42.212139  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:42.212151  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:42.228371  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:42.228388  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:42.298333  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:42.290091   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.290977   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.292784   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.293526   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.294529   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:42.298344  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:42.298363  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:42.375054  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:42.375076  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:42.409015  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:42.409031  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:44.985261  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:44.995937  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:44.995997  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:45.074766  407330 cri.go:89] found id: ""
	I1210 06:37:45.074782  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.074790  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:45.074805  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:45.074874  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:45.130730  407330 cri.go:89] found id: ""
	I1210 06:37:45.130747  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.130755  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:45.130760  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:45.130828  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:45.169030  407330 cri.go:89] found id: ""
	I1210 06:37:45.169058  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.169067  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:45.169073  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:45.169157  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:45.215800  407330 cri.go:89] found id: ""
	I1210 06:37:45.215826  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.215835  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:45.215841  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:45.215915  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:45.274656  407330 cri.go:89] found id: ""
	I1210 06:37:45.274675  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.274684  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:45.274689  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:45.274771  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:45.313260  407330 cri.go:89] found id: ""
	I1210 06:37:45.313277  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.313290  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:45.313296  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:45.313418  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:45.347971  407330 cri.go:89] found id: ""
	I1210 06:37:45.347997  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.348005  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:45.348014  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:45.348028  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:45.381763  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:45.381780  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:45.462459  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:45.462482  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:45.477837  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:45.477854  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:45.547658  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:45.539217   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.540334   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.541688   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.542195   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.543964   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:45.547669  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:45.547680  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:48.124454  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:48.134803  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:48.134866  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:48.162481  407330 cri.go:89] found id: ""
	I1210 06:37:48.162498  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.162507  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:48.162512  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:48.162572  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:48.192262  407330 cri.go:89] found id: ""
	I1210 06:37:48.192276  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.192283  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:48.192289  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:48.192350  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:48.220715  407330 cri.go:89] found id: ""
	I1210 06:37:48.220730  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.220737  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:48.220742  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:48.220802  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:48.244954  407330 cri.go:89] found id: ""
	I1210 06:37:48.244968  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.244976  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:48.244981  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:48.245040  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:48.272316  407330 cri.go:89] found id: ""
	I1210 06:37:48.272330  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.272337  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:48.272343  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:48.272399  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:48.300204  407330 cri.go:89] found id: ""
	I1210 06:37:48.300219  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.300226  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:48.300232  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:48.300293  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:48.329747  407330 cri.go:89] found id: ""
	I1210 06:37:48.329762  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.329769  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:48.329777  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:48.329789  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:48.395638  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:48.395658  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:48.411092  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:48.411108  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:48.478819  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:48.470539   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.471330   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.472882   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.473423   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.475010   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:48.478829  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:48.478841  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:48.556858  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:48.556880  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:51.087332  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:51.097952  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:51.098014  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:51.125310  407330 cri.go:89] found id: ""
	I1210 06:37:51.125325  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.125333  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:51.125345  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:51.125424  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:51.152518  407330 cri.go:89] found id: ""
	I1210 06:37:51.152533  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.152541  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:51.152547  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:51.152619  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:51.181199  407330 cri.go:89] found id: ""
	I1210 06:37:51.181214  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.181222  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:51.181233  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:51.181302  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:51.211368  407330 cri.go:89] found id: ""
	I1210 06:37:51.211382  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.211399  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:51.211405  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:51.211473  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:51.240371  407330 cri.go:89] found id: ""
	I1210 06:37:51.240386  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.240413  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:51.240420  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:51.240493  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:51.266983  407330 cri.go:89] found id: ""
	I1210 06:37:51.266998  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.267005  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:51.267010  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:51.267077  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:51.292392  407330 cri.go:89] found id: ""
	I1210 06:37:51.292417  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.292425  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:51.292433  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:51.292443  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:51.357098  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:51.357119  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:51.372292  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:51.372310  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:51.456874  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:51.448584   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.449513   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.451286   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.451619   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.453250   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:51.456885  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:51.456896  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:51.532131  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:51.532155  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:54.070226  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:54.081032  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:54.081095  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:54.107855  407330 cri.go:89] found id: ""
	I1210 06:37:54.107871  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.107878  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:54.107884  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:54.107954  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:54.133470  407330 cri.go:89] found id: ""
	I1210 06:37:54.133484  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.133491  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:54.133496  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:54.133556  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:54.160836  407330 cri.go:89] found id: ""
	I1210 06:37:54.160851  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.160859  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:54.160864  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:54.160931  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:54.191664  407330 cri.go:89] found id: ""
	I1210 06:37:54.191679  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.191686  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:54.191692  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:54.191758  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:54.216267  407330 cri.go:89] found id: ""
	I1210 06:37:54.216280  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.216298  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:54.216303  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:54.216370  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:54.241369  407330 cri.go:89] found id: ""
	I1210 06:37:54.241383  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.241390  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:54.241395  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:54.241454  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:54.265711  407330 cri.go:89] found id: ""
	I1210 06:37:54.265725  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.265732  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:54.265740  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:54.265750  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:54.280292  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:54.280314  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:54.343110  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:54.335264   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.335904   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337479   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337979   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.339543   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:54.335264   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.335904   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337479   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337979   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.339543   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
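The repeated "connection refused" on https://localhost:8441 means nothing is listening on the apiserver port at all, rather than a TLS or auth failure. Two quick probes run inside the node would confirm that (a diagnostic sketch, assuming ss and curl are available in the image):

    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    curl -ksS https://localhost:8441/healthz   # -k skips cert verification; a refusal here matches the log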
	I1210 06:37:54.343120  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:54.343131  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:54.421398  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:54.421417  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:54.457832  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:54.457849  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
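Each retry gathers the same evidence sources: the kubelet and CRI-O journals, dmesg, `describe nodes`, and container status (where the `which crictl || echo crictl` guard falls back to `docker ps -a` if crictl is missing). To collect the same set in one pass by hand, using the exact flags from the log:

    sudo journalctl -u kubelet -n 400 > /tmp/kubelet.log
    sudo journalctl -u crio -n 400    > /tmp/crio.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > /tmp/dmesg.log
    sudo crictl ps -a                 > /tmp/containers.log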
	I1210 06:37:57.030320  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:57.040862  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:57.040923  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:57.065817  407330 cri.go:89] found id: ""
	I1210 06:37:57.065832  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.065840  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:57.065845  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:57.065908  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:57.091828  407330 cri.go:89] found id: ""
	I1210 06:37:57.091842  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.091849  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:57.091855  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:57.091912  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:57.117033  407330 cri.go:89] found id: ""
	I1210 06:37:57.117047  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.117054  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:57.117060  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:57.117128  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:57.143007  407330 cri.go:89] found id: ""
	I1210 06:37:57.143021  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.143028  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:57.143034  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:57.143090  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:57.171364  407330 cri.go:89] found id: ""
	I1210 06:37:57.171379  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.171386  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:57.171391  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:57.171451  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:57.195695  407330 cri.go:89] found id: ""
	I1210 06:37:57.195723  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.195730  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:57.195736  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:57.195802  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:57.225018  407330 cri.go:89] found id: ""
	I1210 06:37:57.225033  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.225040  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:57.225049  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:57.225060  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:57.299878  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:57.291518   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.292410   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294182   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294649   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.296294   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:57.291518   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.292410   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294182   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294649   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.296294   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:57.299889  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:57.299899  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:57.377757  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:57.377778  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:57.420515  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:57.420531  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:57.493246  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:57.493267  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:00.010113  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
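The pgrep probe that opens every retry is the cheap liveness check: with -f it matches the pattern against the full command line, -x requires the match to cover the whole line, and -n keeps only the newest matching process. Its non-zero exit here (no kube-apiserver process scoped to this profile) is what sends the loop back into container enumeration:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # exit status 1 == no apiserver process yet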
	I1210 06:38:00.082560  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:00.082643  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:00.187405  407330 cri.go:89] found id: ""
	I1210 06:38:00.190377  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.190403  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:00.190413  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:00.190506  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:00.256368  407330 cri.go:89] found id: ""
	I1210 06:38:00.256395  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.256405  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:00.256411  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:00.256498  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:00.309570  407330 cri.go:89] found id: ""
	I1210 06:38:00.309587  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.309595  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:00.309602  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:00.309691  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:00.359167  407330 cri.go:89] found id: ""
	I1210 06:38:00.359184  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.359193  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:00.359199  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:00.359284  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:00.401533  407330 cri.go:89] found id: ""
	I1210 06:38:00.401549  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.401557  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:00.401562  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:00.401629  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:00.439769  407330 cri.go:89] found id: ""
	I1210 06:38:00.439784  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.439792  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:00.439797  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:00.439863  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:00.471369  407330 cri.go:89] found id: ""
	I1210 06:38:00.471384  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.471392  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:00.471400  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:00.471412  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:00.504494  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:00.504511  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:00.570722  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:00.570742  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:00.585662  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:00.585679  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:00.648503  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:00.640817   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.641687   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643282   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643584   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.645048   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:00.640817   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.641687   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643282   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643584   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.645048   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
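The failing `describe nodes` call uses the node's own pinned kubectl binary and the kubeconfig minikube wrote for the cluster, so it can be replayed verbatim from inside the node; any other kubectl subcommand against the same kubeconfig would fail identically while the apiserver is down:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig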
	I1210 06:38:00.648513  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:00.648524  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:03.225660  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:03.235918  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:03.235979  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:03.260969  407330 cri.go:89] found id: ""
	I1210 06:38:03.260984  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.260991  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:03.260996  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:03.261058  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:03.286700  407330 cri.go:89] found id: ""
	I1210 06:38:03.286714  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.286721  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:03.286726  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:03.286785  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:03.315672  407330 cri.go:89] found id: ""
	I1210 06:38:03.315686  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.315694  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:03.315699  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:03.315757  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:03.344486  407330 cri.go:89] found id: ""
	I1210 06:38:03.344501  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.344508  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:03.344517  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:03.344576  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:03.371038  407330 cri.go:89] found id: ""
	I1210 06:38:03.371052  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.371059  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:03.371064  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:03.371127  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:03.404397  407330 cri.go:89] found id: ""
	I1210 06:38:03.404412  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.404420  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:03.404425  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:03.404492  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:03.440935  407330 cri.go:89] found id: ""
	I1210 06:38:03.440949  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.440957  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:03.440965  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:03.440975  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:03.509589  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:03.509610  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:03.525492  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:03.525509  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:03.592907  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:03.584405   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.585238   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587039   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587639   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.589379   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:03.584405   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.585238   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587039   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587639   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.589379   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:03.592926  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:03.592938  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:03.669095  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:03.669114  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
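The probe timestamps (06:37:54, :57, 06:38:00, :03, ...) show the collector polling on a roughly 3-second cadence. As an illustration of the observed behavior only (not minikube's actual implementation), the loop is equivalent to a poll-until-deadline shell idiom:

    deadline=$((SECONDS + 240))   # the 240 s budget is a hypothetical value, not taken from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never came up"; break; }
        sleep 3
    done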
	I1210 06:38:06.198833  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:06.209381  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:06.209457  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:06.234410  407330 cri.go:89] found id: ""
	I1210 06:38:06.234424  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.234431  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:06.234437  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:06.234495  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:06.264001  407330 cri.go:89] found id: ""
	I1210 06:38:06.264016  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.264022  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:06.264028  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:06.264087  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:06.289353  407330 cri.go:89] found id: ""
	I1210 06:38:06.289367  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.289375  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:06.289380  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:06.289442  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:06.318627  407330 cri.go:89] found id: ""
	I1210 06:38:06.318643  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.318651  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:06.318656  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:06.318715  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:06.344169  407330 cri.go:89] found id: ""
	I1210 06:38:06.344183  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.344191  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:06.344196  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:06.344255  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:06.372255  407330 cri.go:89] found id: ""
	I1210 06:38:06.372270  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.372277  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:06.372283  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:06.372346  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:06.410561  407330 cri.go:89] found id: ""
	I1210 06:38:06.410575  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.410582  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:06.410590  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:06.410601  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:06.485685  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:06.485706  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:06.500886  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:06.500904  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:06.569054  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:06.561431   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.562119   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.563630   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.564134   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.565584   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:06.561431   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.562119   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.563630   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.564134   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.565584   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:06.569065  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:06.569078  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:06.650735  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:06.650760  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:09.182920  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:09.193744  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:09.193805  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:09.224238  407330 cri.go:89] found id: ""
	I1210 06:38:09.224253  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.224260  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:09.224265  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:09.224321  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:09.249812  407330 cri.go:89] found id: ""
	I1210 06:38:09.249827  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.249835  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:09.249840  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:09.249900  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:09.275012  407330 cri.go:89] found id: ""
	I1210 06:38:09.275025  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.275032  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:09.275037  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:09.275094  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:09.299472  407330 cri.go:89] found id: ""
	I1210 06:38:09.299500  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.299508  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:09.299513  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:09.299579  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:09.325485  407330 cri.go:89] found id: ""
	I1210 06:38:09.325499  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.325507  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:09.325512  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:09.325567  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:09.350568  407330 cri.go:89] found id: ""
	I1210 06:38:09.350582  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.350589  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:09.350594  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:09.350657  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:09.380510  407330 cri.go:89] found id: ""
	I1210 06:38:09.380524  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.380531  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:09.380548  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:09.380560  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:09.421824  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:09.421840  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:09.497738  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:09.497764  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:09.513692  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:09.513711  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:09.581478  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:09.573930   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.574589   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.576111   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.576487   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.577997   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:09.573930   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.574589   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.576111   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.576487   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.577997   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
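With CRI-O reachable but persistently empty, the component that should be creating these containers is the kubelet, which is also why its journal is gathered on every pass. A first-look sketch for the kubelet side (generic systemd/journalctl commands, not taken from the log):

    systemctl is-active kubelet
    sudo journalctl -u kubelet -n 200 --no-pager | grep -iE 'error|fail' | tail -n 20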
	I1210 06:38:09.581497  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:09.581507  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:12.158761  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:12.169119  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:12.169177  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:12.194655  407330 cri.go:89] found id: ""
	I1210 06:38:12.194670  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.194677  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:12.194683  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:12.194739  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:12.223200  407330 cri.go:89] found id: ""
	I1210 06:38:12.223216  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.223223  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:12.223228  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:12.223293  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:12.249017  407330 cri.go:89] found id: ""
	I1210 06:38:12.249032  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.249043  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:12.249049  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:12.249110  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:12.274392  407330 cri.go:89] found id: ""
	I1210 06:38:12.274407  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.274414  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:12.274420  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:12.274477  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:12.299224  407330 cri.go:89] found id: ""
	I1210 06:38:12.299238  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.299245  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:12.299250  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:12.299310  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:12.324356  407330 cri.go:89] found id: ""
	I1210 06:38:12.324370  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.324377  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:12.324383  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:12.324441  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:12.355846  407330 cri.go:89] found id: ""
	I1210 06:38:12.355876  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.355883  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:12.355892  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:12.355903  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:12.426588  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:12.426608  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:12.446044  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:12.446061  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:12.519015  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:12.508422   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.508965   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.513107   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.513691   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.515195   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:12.508422   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.508965   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.513107   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.513691   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.515195   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:12.519025  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:12.519036  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:12.595463  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:12.595494  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:15.126222  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:15.136973  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:15.137050  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:15.168527  407330 cri.go:89] found id: ""
	I1210 06:38:15.168542  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.168549  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:15.168554  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:15.168615  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:15.195472  407330 cri.go:89] found id: ""
	I1210 06:38:15.195488  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.195496  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:15.195501  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:15.195560  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:15.222272  407330 cri.go:89] found id: ""
	I1210 06:38:15.222286  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.222293  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:15.222298  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:15.222359  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:15.252445  407330 cri.go:89] found id: ""
	I1210 06:38:15.252460  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.252473  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:15.252479  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:15.252541  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:15.279037  407330 cri.go:89] found id: ""
	I1210 06:38:15.279056  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.279063  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:15.279069  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:15.279130  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:15.304272  407330 cri.go:89] found id: ""
	I1210 06:38:15.304287  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.304294  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:15.304299  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:15.304358  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:15.329937  407330 cri.go:89] found id: ""
	I1210 06:38:15.329951  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.329958  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:15.329965  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:15.329976  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:15.344908  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:15.344927  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:15.430525  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:15.420038   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.420859   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.422803   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.424594   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.426170   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:15.420038   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.420859   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.422803   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.424594   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.426170   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:15.430538  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:15.430549  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:15.506380  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:15.506403  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:15.535708  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:15.535725  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:18.102529  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:18.114363  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:18.114433  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:18.140986  407330 cri.go:89] found id: ""
	I1210 06:38:18.141000  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.141007  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:18.141012  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:18.141070  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:18.167798  407330 cri.go:89] found id: ""
	I1210 06:38:18.167812  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.167819  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:18.167827  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:18.167883  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:18.194514  407330 cri.go:89] found id: ""
	I1210 06:38:18.194539  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.194547  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:18.194553  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:18.194614  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:18.219929  407330 cri.go:89] found id: ""
	I1210 06:38:18.219943  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.219949  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:18.219955  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:18.220013  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:18.247728  407330 cri.go:89] found id: ""
	I1210 06:38:18.247742  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.247749  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:18.247755  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:18.247814  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:18.274948  407330 cri.go:89] found id: ""
	I1210 06:38:18.274963  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.274971  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:18.274976  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:18.275034  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:18.301159  407330 cri.go:89] found id: ""
	I1210 06:38:18.301173  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.301196  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:18.301204  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:18.301222  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:18.337936  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:18.337955  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:18.404135  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:18.404153  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:18.420644  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:18.420661  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:18.488180  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:18.479576   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.480035   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.481748   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.482513   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.484281   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:18.479576   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.480035   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.481748   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.482513   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.484281   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
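The CRI-O journal is pulled on every pass as well; to rule out the runtime itself before blaming the kubelet, a couple of generic checks (assuming systemd and crictl are present, as the log already does):

    systemctl is-active crio
    sudo crictl info   # runtime status and conditions, printed as JSON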
	I1210 06:38:18.488199  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:18.488210  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
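The block above is one iteration of minikube's wait loop: it probes for a running kube-apiserver process and, when the probe comes back empty, collects kubelet, dmesg, describe-nodes, CRI-O and container-status logs before probing again. A rough shell equivalent of that probe-and-gather cycle, built only from the commands visible in the log (the 3-second cadence is inferred from the timestamps, not taken from minikube's source):

    # keep gathering diagnostics until an apiserver process shows up
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sudo journalctl -u kubelet -n 400
        sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
        sudo journalctl -u crio -n 400
        sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
        sleep 3   # approximate interval seen between probes in the log
    done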
	I1210 06:38:21.064064  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:21.074224  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:21.074283  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:21.100332  407330 cri.go:89] found id: ""
	I1210 06:38:21.100347  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.100354  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:21.100359  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:21.100416  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:21.128496  407330 cri.go:89] found id: ""
	I1210 06:38:21.128511  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.128518  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:21.128523  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:21.128583  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:21.165661  407330 cri.go:89] found id: ""
	I1210 06:38:21.165675  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.165682  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:21.165687  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:21.165745  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:21.191177  407330 cri.go:89] found id: ""
	I1210 06:38:21.191191  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.191199  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:21.191204  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:21.191262  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:21.217247  407330 cri.go:89] found id: ""
	I1210 06:38:21.217263  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.217270  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:21.217275  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:21.217336  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:21.243649  407330 cri.go:89] found id: ""
	I1210 06:38:21.243663  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.243670  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:21.243675  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:21.243731  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:21.272574  407330 cri.go:89] found id: ""
	I1210 06:38:21.272589  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.272596  407330 logs.go:284] No container was found matching "kindnet"
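Between gathers, minikube sweeps the CRI for each control-plane component by name; the sweep above finds zero containers for all seven names, which is what keeps the wait loop going. The whole sweep condenses to a small loop over crictl (component list copied verbatim from the log):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        # empty output means no container, running or exited, matches the name
        [ -n "$ids" ] && echo "$c: $ids" || echo "no container matching \"$c\""
    done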
	I1210 06:38:21.272604  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:21.272615  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:21.336563  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:21.328507   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.329001   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.330691   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.331320   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.332859   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:21.336573  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:21.336583  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:21.419141  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:21.419163  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:21.452486  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:21.452504  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:21.518913  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:21.518934  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:24.035407  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:24.051364  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:24.051491  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:24.079890  407330 cri.go:89] found id: ""
	I1210 06:38:24.079905  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.079913  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:24.079918  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:24.079976  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:24.108058  407330 cri.go:89] found id: ""
	I1210 06:38:24.108072  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.108089  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:24.108094  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:24.108160  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:24.136304  407330 cri.go:89] found id: ""
	I1210 06:38:24.136318  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.136325  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:24.136331  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:24.136388  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:24.166784  407330 cri.go:89] found id: ""
	I1210 06:38:24.166805  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.166813  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:24.166819  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:24.166879  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:24.194254  407330 cri.go:89] found id: ""
	I1210 06:38:24.194270  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.194278  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:24.194283  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:24.194349  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:24.220032  407330 cri.go:89] found id: ""
	I1210 06:38:24.220046  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.220053  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:24.220058  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:24.220125  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:24.249252  407330 cri.go:89] found id: ""
	I1210 06:38:24.249267  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.249275  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:24.249282  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:24.249301  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:24.332782  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:24.332809  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:24.363293  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:24.363313  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:24.439310  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:24.439334  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:24.454866  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:24.454883  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:24.518646  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:24.510636   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.511199   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.512759   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.513269   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.514934   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
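Every describe-nodes attempt fails the same way: kubectl dials the apiserver at localhost:8441 and gets connection refused, which means nothing is listening on the port yet rather than a TLS or auth problem. The failing command as the log runs it, plus a suggested socket check (the ss invocation is an added diagnostic, not something the log executes):

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig   # refused while :8441 is closed
    sudo ss -ltnp | grep 8441 || echo 'nothing listening on :8441'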
	I1210 06:38:27.018916  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:27.029680  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:27.029748  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:27.057853  407330 cri.go:89] found id: ""
	I1210 06:38:27.057868  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.057876  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:27.057881  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:27.057943  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:27.088489  407330 cri.go:89] found id: ""
	I1210 06:38:27.088504  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.088512  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:27.088517  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:27.088576  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:27.114135  407330 cri.go:89] found id: ""
	I1210 06:38:27.114150  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.114158  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:27.114163  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:27.114222  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:27.144417  407330 cri.go:89] found id: ""
	I1210 06:38:27.144431  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.144438  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:27.144443  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:27.144502  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:27.170599  407330 cri.go:89] found id: ""
	I1210 06:38:27.170613  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.170621  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:27.170626  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:27.170704  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:27.196493  407330 cri.go:89] found id: ""
	I1210 06:38:27.196508  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.196516  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:27.196521  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:27.196577  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:27.222440  407330 cri.go:89] found id: ""
	I1210 06:38:27.222455  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.222462  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:27.222469  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:27.222480  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:27.288558  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:27.288578  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:27.304274  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:27.304290  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:27.370398  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:27.361823   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.362522   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.364129   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.364518   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.366357   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:27.370408  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:27.370419  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:27.458800  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:27.458821  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:29.988954  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:29.999798  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:29.999864  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:30.095338  407330 cri.go:89] found id: ""
	I1210 06:38:30.095356  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.095364  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:30.095370  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:30.095440  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:30.129132  407330 cri.go:89] found id: ""
	I1210 06:38:30.129148  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.129156  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:30.129162  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:30.129271  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:30.157101  407330 cri.go:89] found id: ""
	I1210 06:38:30.157117  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.157124  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:30.157130  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:30.157224  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:30.184791  407330 cri.go:89] found id: ""
	I1210 06:38:30.184806  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.184814  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:30.184819  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:30.184885  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:30.211932  407330 cri.go:89] found id: ""
	I1210 06:38:30.211958  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.211966  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:30.211971  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:30.212041  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:30.238373  407330 cri.go:89] found id: ""
	I1210 06:38:30.238398  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.238407  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:30.238413  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:30.238479  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:30.266144  407330 cri.go:89] found id: ""
	I1210 06:38:30.266159  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.266167  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:30.266176  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:30.266187  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:30.337549  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:30.337570  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:30.353715  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:30.353731  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:30.430797  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:30.422887   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.423661   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.425295   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.425615   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.427098   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:30.430808  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:30.430821  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:30.510900  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:30.510921  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:33.040458  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:33.051069  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:33.051132  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:33.081117  407330 cri.go:89] found id: ""
	I1210 06:38:33.081131  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.081138  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:33.081144  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:33.081232  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:33.110972  407330 cri.go:89] found id: ""
	I1210 06:38:33.110986  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.110993  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:33.110998  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:33.111055  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:33.136083  407330 cri.go:89] found id: ""
	I1210 06:38:33.136098  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.136104  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:33.136110  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:33.136170  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:33.162539  407330 cri.go:89] found id: ""
	I1210 06:38:33.162554  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.162561  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:33.162567  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:33.162628  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:33.192025  407330 cri.go:89] found id: ""
	I1210 06:38:33.192039  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.192047  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:33.192053  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:33.192114  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:33.217529  407330 cri.go:89] found id: ""
	I1210 06:38:33.217544  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.217562  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:33.217568  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:33.217637  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:33.242901  407330 cri.go:89] found id: ""
	I1210 06:38:33.242916  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.242923  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:33.242931  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:33.242942  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:33.311877  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:33.311897  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:33.327423  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:33.327438  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:33.395423  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:33.386462   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.387346   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.388905   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.389556   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.391613   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:33.395434  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:33.395444  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:33.477529  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:33.477551  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:36.008120  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:36.021683  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:36.021745  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:36.049460  407330 cri.go:89] found id: ""
	I1210 06:38:36.049475  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.049482  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:36.049487  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:36.049560  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:36.076929  407330 cri.go:89] found id: ""
	I1210 06:38:36.076944  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.076951  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:36.076956  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:36.077017  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:36.103193  407330 cri.go:89] found id: ""
	I1210 06:38:36.103208  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.103214  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:36.103219  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:36.103285  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:36.129995  407330 cri.go:89] found id: ""
	I1210 06:38:36.130009  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.130024  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:36.130029  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:36.130087  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:36.156753  407330 cri.go:89] found id: ""
	I1210 06:38:36.156781  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.156789  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:36.156794  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:36.156857  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:36.188439  407330 cri.go:89] found id: ""
	I1210 06:38:36.188453  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.188461  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:36.188466  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:36.188525  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:36.214278  407330 cri.go:89] found id: ""
	I1210 06:38:36.214293  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.214300  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:36.214309  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:36.214321  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:36.280730  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:36.280750  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:36.296203  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:36.296220  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:36.364197  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:36.355593   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.356352   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358159   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358872   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.360561   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:36.364209  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:36.364222  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:36.458076  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:36.458097  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:38.987911  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:38.998557  407330 kubeadm.go:602] duration metric: took 4m3.870918207s to restartPrimaryControlPlane
	W1210 06:38:38.998620  407330 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 06:38:38.998704  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 06:38:39.409934  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:38:39.423184  407330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:38:39.431304  407330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:38:39.431358  407330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:38:39.439341  407330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:38:39.439350  407330 kubeadm.go:158] found existing configuration files:
	
	I1210 06:38:39.439401  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:38:39.447538  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:38:39.447592  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:38:39.454886  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:38:39.462719  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:38:39.462778  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:38:39.470357  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:38:39.477894  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:38:39.477950  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:38:39.485341  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:38:39.493235  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:38:39.493292  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
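With the restart abandoned, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that lacks it; here every grep exits with status 2 because kubeadm reset already removed the files, so each rm is a no-op. The check-and-remove sequence condenses to the loop below (endpoint and file list copied from the log; grep -q is a cosmetic substitution):

    endpoint='https://control-plane.minikube.internal:8441'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # missing or stale (wrong endpoint): remove so kubeadm regenerates it
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done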
	I1210 06:38:39.500743  407330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:38:39.538320  407330 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:38:39.538555  407330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:38:39.610131  407330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:38:39.610196  407330 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:38:39.610230  407330 kubeadm.go:319] OS: Linux
	I1210 06:38:39.610281  407330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:38:39.610328  407330 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:38:39.610374  407330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:38:39.610421  407330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:38:39.610468  407330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:38:39.610517  407330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:38:39.610561  407330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:38:39.610608  407330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:38:39.610653  407330 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:38:39.676087  407330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:38:39.676189  407330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:38:39.676279  407330 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:38:39.683789  407330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:38:39.689387  407330 out.go:252]   - Generating certificates and keys ...
	I1210 06:38:39.689490  407330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:38:39.689554  407330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:38:39.689629  407330 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:38:39.689689  407330 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:38:39.689759  407330 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:38:39.689811  407330 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:38:39.689904  407330 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:38:39.689978  407330 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:38:39.690060  407330 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:38:39.690139  407330 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:38:39.690176  407330 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:38:39.690241  407330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:38:40.131783  407330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:38:40.503719  407330 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:38:40.658362  407330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:38:41.256208  407330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:38:41.407412  407330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:38:41.408125  407330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:38:41.410853  407330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:38:41.414436  407330 out.go:252]   - Booting up control plane ...
	I1210 06:38:41.414546  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:38:41.414623  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:38:41.414696  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:38:41.431657  407330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:38:41.431964  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:38:41.440211  407330 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:38:41.440329  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:38:41.440568  407330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:38:41.565122  407330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:38:41.565287  407330 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:42:41.565436  407330 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000253721s
	I1210 06:42:41.565465  407330 kubeadm.go:319] 
	I1210 06:42:41.565522  407330 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:42:41.565554  407330 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:42:41.565658  407330 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:42:41.565663  407330 kubeadm.go:319] 
	I1210 06:42:41.565766  407330 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:42:41.565797  407330 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:42:41.565827  407330 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:42:41.565830  407330 kubeadm.go:319] 
	I1210 06:42:41.570718  407330 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:42:41.571209  407330 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:42:41.571330  407330 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:42:41.571595  407330 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:42:41.571607  407330 kubeadm.go:319] 
	I1210 06:42:41.571752  407330 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
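Two of the preflight warnings above are directly actionable: the node still runs cgroups v1, which kubelet v1.35+ rejects unless the FailCgroupV1 option is set to false (and the validation explicitly skipped, per the warning text), and the kubelet unit is not enabled. A minimal sketch of both remediations; the config path comes from the log, but the camelCase field spelling and appending to the file this way are assumptions about how the kubelet config is managed:

    # assumed KubeletConfiguration field name (camelCase of the FailCgroupV1 option)
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl enable kubelet.service   # per the Service-kubelet warning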
	W1210 06:42:41.571857  407330 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
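The failure reduces to a single probe: kubeadm polled the kubelet's healthz endpoint for the full 4m0s window and never got an answer, so the control-plane static pods were never confirmed. The checks kubeadm suggests, plus the probe itself, can be replayed by hand on the node:

    systemctl status kubelet                     # is the unit running at all?
    journalctl -xeu kubelet                      # why it crashed or never started
    curl -sSL http://127.0.0.1:10248/healthz     # the exact probe kubeadm uses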
	
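The preflight warnings above already narrow this down: the node still runs the legacy cgroup v1 hierarchy, which kubelet v1.35 deprecates. A minimal triage sketch, assuming shell access to the node via this run's profile name (functional-253997, taken from later in this log); the first two commands are the ones kubeadm's own error text recommends:

    # Inspect the kubelet unit and its recent journal, as kubeadm suggests above.
    minikube ssh -p functional-253997 -- sudo systemctl status kubelet
    minikube ssh -p functional-253997 -- sudo journalctl -xeu kubelet | tail -n 50
    # Identify the cgroup hierarchy: cgroup2fs means unified v2, tmpfs means the
    # deprecated v1 layout that triggers the SystemVerification warning.
    minikube ssh -p functional-253997 -- stat -fc %T /sys/fs/cgroup
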
	I1210 06:42:41.571950  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 06:42:41.983114  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:42:41.996619  407330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:42:41.996677  407330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:42:42.015710  407330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:42:42.015721  407330 kubeadm.go:158] found existing configuration files:
	
	I1210 06:42:42.015783  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:42:42.031380  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:42:42.031448  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:42:42.040300  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:42:42.049113  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:42:42.049177  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:42:42.057272  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:42:42.066509  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:42:42.066573  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:42:42.076663  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:42:42.086749  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:42:42.086829  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:42:42.096582  407330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:42:42.144385  407330 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:42:42.144469  407330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:42:42.248727  407330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:42:42.248801  407330 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:42:42.248835  407330 kubeadm.go:319] OS: Linux
	I1210 06:42:42.248888  407330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:42:42.248946  407330 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:42:42.249004  407330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:42:42.249052  407330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:42:42.249117  407330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:42:42.249198  407330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:42:42.249245  407330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:42:42.249306  407330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:42:42.249359  407330 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:42:42.316721  407330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:42:42.316825  407330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:42:42.316916  407330 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:42:42.325666  407330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:42:42.330985  407330 out.go:252]   - Generating certificates and keys ...
	I1210 06:42:42.331095  407330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:42:42.331182  407330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:42:42.331258  407330 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:42:42.331331  407330 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:42:42.331424  407330 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:42:42.331487  407330 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:42:42.331560  407330 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:42:42.331637  407330 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:42:42.331721  407330 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:42:42.331801  407330 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:42:42.331847  407330 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:42:42.331912  407330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:42:42.541750  407330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:42:43.048349  407330 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:42:43.167759  407330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:42:43.323314  407330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:42:43.407090  407330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:42:43.408333  407330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:42:43.412234  407330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:42:43.415621  407330 out.go:252]   - Booting up control plane ...
	I1210 06:42:43.415734  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:42:43.415811  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:42:43.416436  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:42:43.431439  407330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:42:43.431813  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:42:43.438586  407330 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:42:43.438900  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:42:43.438951  407330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:42:43.563199  407330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:42:43.563333  407330 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:46:43.563419  407330 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000308988s
	I1210 06:46:43.563446  407330 kubeadm.go:319] 
	I1210 06:46:43.563502  407330 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:46:43.563534  407330 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:46:43.563637  407330 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:46:43.563641  407330 kubeadm.go:319] 
	I1210 06:46:43.563744  407330 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:46:43.563775  407330 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:46:43.563804  407330 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:46:43.563807  407330 kubeadm.go:319] 
	I1210 06:46:43.567965  407330 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:46:43.568389  407330 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:46:43.568496  407330 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:46:43.568730  407330 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:46:43.568734  407330 kubeadm.go:319] 
	I1210 06:46:43.568801  407330 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:46:43.568851  407330 kubeadm.go:403] duration metric: took 12m8.481939807s to StartCluster
	I1210 06:46:43.568881  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:46:43.568941  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:46:43.595798  407330 cri.go:89] found id: ""
	I1210 06:46:43.595831  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.595854  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:46:43.595860  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:46:43.595925  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:46:43.621092  407330 cri.go:89] found id: ""
	I1210 06:46:43.621107  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.621114  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:46:43.621123  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:46:43.621181  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:46:43.646506  407330 cri.go:89] found id: ""
	I1210 06:46:43.646520  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.646528  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:46:43.646533  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:46:43.646593  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:46:43.671975  407330 cri.go:89] found id: ""
	I1210 06:46:43.671990  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.671997  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:46:43.672003  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:46:43.672059  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:46:43.698910  407330 cri.go:89] found id: ""
	I1210 06:46:43.698925  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.698932  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:46:43.698937  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:46:43.698997  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:46:43.727644  407330 cri.go:89] found id: ""
	I1210 06:46:43.727660  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.727667  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:46:43.727672  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:46:43.727732  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:46:43.752849  407330 cri.go:89] found id: ""
	I1210 06:46:43.752864  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.752871  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:46:43.752879  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:46:43.752889  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:46:43.818161  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:46:43.818181  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:46:43.833400  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:46:43.833417  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:46:43.902591  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:46:43.894870   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.895546   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897128   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897673   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.899158   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:46:43.894870   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.895546   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897128   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897673   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.899158   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:46:43.902602  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:46:43.902614  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:46:43.975424  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:46:43.975445  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 06:46:44.022327  407330 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:46:44.022377  407330 out.go:285] * 
	W1210 06:46:44.022442  407330 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:46:44.022452  407330 out.go:285] * 
	W1210 06:46:44.024584  407330 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:46:44.031496  407330 out.go:203] 
	W1210 06:46:44.034389  407330 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:46:44.034453  407330 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:46:44.034475  407330 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:46:44.037811  407330 out.go:203] 
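Applied literally, the suggestion above amounts to a retry along these lines; a sketch only, with the profile, driver, and runtime flags copied from this run and the extra-config value being the exact string minikube printed:

    out/minikube-linux-arm64 start -p functional-253997 --driver=docker \
      --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

Whether that helps here is doubtful, since the kubelet journal below shows the rejection concerns the cgroup v1 hierarchy itself rather than the cgroup driver.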
	
	
	==> CRI-O <==
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914305234Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914347581Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914410941Z" level=info msg="Create NRI interface"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914519907Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914528243Z" level=info msg="runtime interface created"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914540707Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914547246Z" level=info msg="runtime interface starting up..."
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914553523Z" level=info msg="starting plugins..."
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914566389Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914635518Z" level=info msg="No systemd watchdog enabled"
	Dec 10 06:34:32 functional-253997 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.679749304Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=256aed1f-deb7-4ef3-85cd-131eefce5f31 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.680508073Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=d66c85ac-bdac-47c8-b0cb-0b9c6495c2c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681012677Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=9d08e49c-548c-44b3-98b1-7f3a5851a031 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681572306Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=0bc6e3be-4b4d-4362-bc99-b8372d06365e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681969496Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2f86c405-f63c-4d07-a2ec-618b9449eabe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.682410707Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f71d0106-3216-4008-9111-b1a84be0126f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.682849883Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=c187c18f-0638-4353-a242-3d51d64c2a33 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.320018913Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=2592a99c-52fd-46d2-9cab-44207ad0e0c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321028729Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=f6ff0057-6b99-4df5-9aca-bca9adc94791 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321868157Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=9722b5ff-a4b3-445c-b3c4-2f4e55341b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322453627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=b61d672b-0949-4316-8a7f-4071c86b03d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322949111Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=450c82e8-dfb9-4142-8b40-08c4b80c0ef4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.32354821Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=0920e0ed-8fd6-46c5-8404-88b74f388f67 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.324043932Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=51273fc0-dd5c-4357-b908-13dc96e1efa4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:48:55.272105   23944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:48:55.272658   23944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:48:55.273973   23944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:48:55.274379   23944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:48:55.275828   23944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 06:15] overlayfs: idmapped layers are currently not supported
	[Dec10 06:16] overlayfs: idmapped layers are currently not supported
	[Dec10 06:19] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:48:55 up  3:31,  0 user,  load average: 0.52, 0.25, 0.45
	Linux functional-253997 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:48:52 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:48:53 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 10 06:48:53 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:53 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:53 functional-253997 kubelet[23788]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:53 functional-253997 kubelet[23788]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:53 functional-253997 kubelet[23788]: E1210 06:48:53.196408   23788 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:48:53 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:48:53 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:48:53 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 10 06:48:53 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:53 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:53 functional-253997 kubelet[23826]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:53 functional-253997 kubelet[23826]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:53 functional-253997 kubelet[23826]: E1210 06:48:53.920115   23826 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:48:53 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:48:53 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:48:54 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 10 06:48:54 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:54 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:54 functional-253997 kubelet[23860]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:54 functional-253997 kubelet[23860]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:54 functional-253997 kubelet[23860]: E1210 06:48:54.685352   23860 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:48:54 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:48:54 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
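The kubelet section above is the actual failure: systemd cycles the unit (restart counters 812 through 814 in this window alone) and every attempt exits with "kubelet is configured to not run on a host using cgroup v1". That matches the preflight warning earlier, which names the escape hatch as setting the kubelet configuration option 'FailCgroupV1' to 'false'. A sketch of that opt-out, assuming the v1.35 KubeletConfiguration spells the field failCgroupV1 (the camelCase reading of the option the warning quotes; verify against the kubelet configuration reference first):

    # Hypothetical: append the opt-out to the generated kubelet config on the
    # node and restart the unit. Note kubeadm rewrites this file on each init,
    # so a durable fix would patch the config kubeadm is fed instead.
    minikube ssh -p functional-253997 -- sudo sh -c \
      'printf "failCgroupV1: false\n" >> /var/lib/kubelet/config.yaml && systemctl restart kubelet'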
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 2 (377.690037ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (3.21s)
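For reference, the probe the harness ran is reproducible by hand; the exit status 2 and "Stopped" above simply mirror the dead apiserver rather than a defect in the status command itself, which is why helpers_test notes "may be ok":

    # The same Go-template status probe the test used (command copied from the log).
    out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997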
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-253997 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-253997 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (59.388534ms)
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-253997 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-253997 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-253997 describe po hello-node-connect: exit status 1 (58.378378ms)
** stderr ** 
	E1210 06:48:39.299346  421544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:39.300829  421544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:39.302321  421544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:39.303797  421544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:39.305311  421544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:1614: "kubectl --context functional-253997 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-253997 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-253997 logs -l app=hello-node-connect: exit status 1 (60.05651ms)
** stderr ** 
	E1210 06:48:39.360426  421548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:39.362015  421548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:39.363498  421548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:39.364958  421548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:1620: "kubectl --context functional-253997 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-253997 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-253997 describe svc hello-node-connect: exit status 1 (65.873736ms)
** stderr ** 
	E1210 06:48:39.425444  421552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:39.427033  421552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:39.428480  421552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:39.429913  421552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:39.431343  421552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-253997 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
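All three kubectl invocations above fail identically: the client cannot even fetch the API group list because nothing is listening on 192.168.49.2:8441, so the failure is the apiserver being down rather than anything specific to hello-node-connect. A quick manual probe (a sketch, assuming a shell on the CI host and the endpoints shown in this report):

	# "Connection refused" here confirms the address is reachable but no
	# process is listening, matching the kubectl errors above.
	nc -vz -w 2 192.168.49.2 8441
	# Same check through the Docker-published host port (33162, per the
	# docker inspect output below):
	curl -k --max-time 2 https://127.0.0.1:33162/livez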
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-253997
helpers_test.go:244: (dbg) docker inspect functional-253997:

-- stdout --
	[
	    {
	        "Id": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	        "Created": "2025-12-10T06:19:33.832297734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:33.914699563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hosts",
	        "LogPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7-json.log",
	        "Name": "/functional-253997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-253997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-253997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	                "LowerDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-253997",
	                "Source": "/var/lib/docker/volumes/functional-253997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253997",
	                "name.minikube.sigs.k8s.io": "functional-253997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484c42edbd556e17e2c5a836571d3275bc9cbd157b70c100e08a5b73322a62e6",
	            "SandboxKey": "/var/run/docker/netns/484c42edbd55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ba:52:c5:6e:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fc5aadc020d5981cfdab05726de722ae81d653cbc30b66014568069657370fe7",
	                    "EndpointID": "9ecd4882ea5c9fe5581b68ad15a0a2f450e34777aee426fcc158008c81076e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253997",
	                        "256e059cdf1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
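The inspect output shows the control-plane container itself is healthy ("Status": "running", not OOM-killed or restarting) and 8441/tcp is published on 127.0.0.1:33162, so the connection-refused errors originate inside the container rather than from Docker networking. The same Go-template pattern the harness uses for 22/tcp later in these logs reads the apiserver mapping directly:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-253997
	# -> 33162, matching the NetworkSettings.Ports block above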
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997: exit status 2 (312.386296ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
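The --format={{.Host}} template prints only the host field, which is why the output reads "Running" while the command still exits 2: non-zero status codes from "minikube status" signal that some component (here the apiserver) is degraded even though the container is up. For a per-component view, drop the template (a sketch, assuming the same binary and profile):

	out/minikube-linux-arm64 status -p functional-253997 -o json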
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-253997 logs -n 25: (1.028552317s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-253997 cache reload                                                                                                                               │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ ssh     │ functional-253997 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │ 10 Dec 25 06:34 UTC │
	│ kubectl │ functional-253997 kubectl -- --context functional-253997 get pods                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │                     │
	│ start   │ -p functional-253997 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:34 UTC │                     │
	│ cp      │ functional-253997 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ config  │ functional-253997 config unset cpus                                                                                                                          │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ config  │ functional-253997 config get cpus                                                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │                     │
	│ config  │ functional-253997 config set cpus 2                                                                                                                          │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ config  │ functional-253997 config get cpus                                                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ config  │ functional-253997 config unset cpus                                                                                                                          │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ ssh     │ functional-253997 ssh -n functional-253997 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ config  │ functional-253997 config get cpus                                                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │                     │
	│ ssh     │ functional-253997 ssh echo hello                                                                                                                             │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ cp      │ functional-253997 cp functional-253997:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm2050588435/001/cp-test.txt │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ ssh     │ functional-253997 ssh cat /etc/hostname                                                                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ ssh     │ functional-253997 ssh -n functional-253997 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ tunnel  │ functional-253997 tunnel --alsologtostderr                                                                                                                   │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │                     │
	│ tunnel  │ functional-253997 tunnel --alsologtostderr                                                                                                                   │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │                     │
	│ cp      │ functional-253997 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ tunnel  │ functional-253997 tunnel --alsologtostderr                                                                                                                   │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │                     │
	│ ssh     │ functional-253997 ssh -n functional-253997 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ addons  │ functional-253997 addons list                                                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ addons  │ functional-253997 addons list -o json                                                                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:34:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:34:29.186876  407330 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:34:29.187053  407330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:34:29.187058  407330 out.go:374] Setting ErrFile to fd 2...
	I1210 06:34:29.187062  407330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:34:29.187341  407330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:34:29.187713  407330 out.go:368] Setting JSON to false
	I1210 06:34:29.188576  407330 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11822,"bootTime":1765336648,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:34:29.188634  407330 start.go:143] virtualization:  
	I1210 06:34:29.192149  407330 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:34:29.195073  407330 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:34:29.195162  407330 notify.go:221] Checking for updates...
	I1210 06:34:29.200831  407330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:34:29.203909  407330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:34:29.206776  407330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:34:29.209617  407330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:34:29.212440  407330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:34:29.215839  407330 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:34:29.215937  407330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:34:29.239404  407330 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:34:29.239516  407330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:34:29.302303  407330 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:34:29.292878865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:34:29.302405  407330 docker.go:319] overlay module found
	I1210 06:34:29.305588  407330 out.go:179] * Using the docker driver based on existing profile
	I1210 06:34:29.308369  407330 start.go:309] selected driver: docker
	I1210 06:34:29.308379  407330 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:34:29.308484  407330 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:34:29.308590  407330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:34:29.367055  407330 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 06:34:29.35802689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:34:29.367451  407330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:34:29.367476  407330 cni.go:84] Creating CNI manager for ""
	I1210 06:34:29.367527  407330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:34:29.367575  407330 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:34:29.370834  407330 out.go:179] * Starting "functional-253997" primary control-plane node in "functional-253997" cluster
	I1210 06:34:29.373779  407330 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:34:29.376601  407330 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:34:29.379406  407330 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:34:29.379504  407330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:34:29.398798  407330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:34:29.398809  407330 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:34:29.439425  407330 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 06:34:29.641198  407330 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
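Both preload mirrors return 404 for v1.35.0-rc.1 (release-candidate versions generally have no prebuilt preload tarball), so minikube falls back to caching individual images and downloading kubeadm directly, as the following lines show. The 404s can be confirmed by hand with the exact URLs from the warnings:

	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 | head -n 1
	curl -sI https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 | head -n 1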
	I1210 06:34:29.641344  407330 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/config.json ...
	I1210 06:34:29.641548  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:29.641601  407330 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:34:29.641630  407330 start.go:360] acquireMachinesLock for functional-253997: {Name:mkd4a204596bf14d7530a6c5c103756527acdf26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:29.641675  407330 start.go:364] duration metric: took 26.355µs to acquireMachinesLock for "functional-253997"
	I1210 06:34:29.641688  407330 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:34:29.641692  407330 fix.go:54] fixHost starting: 
	I1210 06:34:29.641950  407330 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
	I1210 06:34:29.660018  407330 fix.go:112] recreateIfNeeded on functional-253997: state=Running err=<nil>
	W1210 06:34:29.660039  407330 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:34:29.663260  407330 out.go:252] * Updating the running docker "functional-253997" container ...
	I1210 06:34:29.663287  407330 machine.go:94] provisionDockerMachine start ...
	I1210 06:34:29.663366  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:29.683378  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:29.683692  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:29.683698  407330 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:34:29.821832  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:29.837224  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:34:29.837239  407330 ubuntu.go:182] provisioning hostname "functional-253997"
	I1210 06:34:29.837320  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:29.868971  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:29.869301  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:29.869310  407330 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-253997 && echo "functional-253997" | sudo tee /etc/hostname
	I1210 06:34:29.986840  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:30.112009  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-253997
	
	I1210 06:34:30.112104  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:30.132596  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:30.132908  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:30.132923  407330 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-253997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-253997/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-253997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:34:30.208840  407330 cache.go:107] acquiring lock: {Name:mk9268471d41d450748d4b0133d1a043378776f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.208835  407330 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.208914  407330 cache.go:107] acquiring lock: {Name:mk8250dc655f821bf2674ed77a7683798ede4f4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.208957  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:34:30.208967  407330 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 138.989µs
	I1210 06:34:30.208975  407330 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:34:30.208986  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:34:30.209001  407330 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 97.733µs
	I1210 06:34:30.208999  407330 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209007  407330 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:34:30.209031  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:34:30.209036  407330 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.599µs
	I1210 06:34:30.209024  407330 cache.go:107] acquiring lock: {Name:mk1c0334e51772e4f6b3429cc05b91ec19d54b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209041  407330 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:34:30.209051  407330 cache.go:107] acquiring lock: {Name:mkc2831727d74e5fadb1c00bd89f0236b385200e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209067  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:34:30.209072  407330 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 53.268µs
	I1210 06:34:30.209089  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:34:30.209088  407330 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:34:30.209095  407330 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 45.753µs
	I1210 06:34:30.209100  407330 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:34:30.209108  407330 cache.go:107] acquiring lock: {Name:mk7e052900e41419dc78d3311f27db508a8f64b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209102  407330 cache.go:107] acquiring lock: {Name:mk41aaba55a3296a937cf380d44dc9920df69546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:34:30.209134  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:34:30.209138  407330 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 31.27µs
	I1210 06:34:30.209143  407330 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:34:30.209145  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:34:30.209151  407330 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 50.536µs
	I1210 06:34:30.209155  407330 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:34:30.209160  407330 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:34:30.209163  407330 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 351.676µs
	I1210 06:34:30.209168  407330 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:34:30.209180  407330 cache.go:87] Successfully saved all images to host disk.
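Each cache.go goroutine above takes a named lock, finds its per-image tarball already on disk, and skips the save, which is why every entry completes in microseconds. The fallback cache can be spot-checked directly (path taken from the log lines above):

	ls /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/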
	I1210 06:34:30.290041  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:34:30.290057  407330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 06:34:30.290077  407330 ubuntu.go:190] setting up certificates
	I1210 06:34:30.290086  407330 provision.go:84] configureAuth start
	I1210 06:34:30.290163  407330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:34:30.308042  407330 provision.go:143] copyHostCerts
	I1210 06:34:30.308132  407330 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 06:34:30.308140  407330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 06:34:30.308215  407330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 06:34:30.308356  407330 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 06:34:30.308366  407330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 06:34:30.308393  407330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 06:34:30.308451  407330 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 06:34:30.308454  407330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 06:34:30.308477  407330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 06:34:30.308526  407330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.functional-253997 san=[127.0.0.1 192.168.49.2 functional-253997 localhost minikube]
	I1210 06:34:30.594902  407330 provision.go:177] copyRemoteCerts
	I1210 06:34:30.594965  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:34:30.595003  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:30.611740  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:30.721082  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:34:30.738821  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:34:30.756666  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:34:30.774292  407330 provision.go:87] duration metric: took 484.176925ms to configureAuth
	I1210 06:34:30.774310  407330 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:34:30.774512  407330 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:34:30.774629  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:30.792842  407330 main.go:143] libmachine: Using SSH client type: native
	I1210 06:34:30.793168  407330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I1210 06:34:30.793179  407330 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:34:31.164456  407330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:34:31.164470  407330 machine.go:97] duration metric: took 1.501175708s to provisionDockerMachine
	I1210 06:34:31.164497  407330 start.go:293] postStartSetup for "functional-253997" (driver="docker")
	I1210 06:34:31.164510  407330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:34:31.164571  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:34:31.164607  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.185147  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.293395  407330 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:34:31.296969  407330 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:34:31.296987  407330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:34:31.296998  407330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 06:34:31.297053  407330 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 06:34:31.297133  407330 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 06:34:31.297238  407330 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts -> hosts in /etc/test/nested/copy/364265
	I1210 06:34:31.297285  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364265
	I1210 06:34:31.305181  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:34:31.324368  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts --> /etc/test/nested/copy/364265/hosts (40 bytes)
	I1210 06:34:31.342686  407330 start.go:296] duration metric: took 178.173087ms for postStartSetup
	I1210 06:34:31.342778  407330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:34:31.342817  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.360907  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.462708  407330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:34:31.467744  407330 fix.go:56] duration metric: took 1.826044535s for fixHost
	I1210 06:34:31.467760  407330 start.go:83] releasing machines lock for "functional-253997", held for 1.826077816s
	I1210 06:34:31.467840  407330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-253997
	I1210 06:34:31.485284  407330 ssh_runner.go:195] Run: cat /version.json
	I1210 06:34:31.485341  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.485360  407330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:34:31.485410  407330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
	I1210 06:34:31.504331  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.505583  407330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
	I1210 06:34:31.702850  407330 ssh_runner.go:195] Run: systemctl --version
	I1210 06:34:31.710100  407330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:34:31.751135  407330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:34:31.755552  407330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:34:31.755612  407330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:34:31.763681  407330 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:34:31.763695  407330 start.go:496] detecting cgroup driver to use...
	I1210 06:34:31.763726  407330 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:34:31.763773  407330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:34:31.779177  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:34:31.792657  407330 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:34:31.792726  407330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:34:31.808481  407330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:34:31.821835  407330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:34:31.953412  407330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:34:32.070663  407330 docker.go:234] disabling docker service ...
	I1210 06:34:32.070719  407330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:34:32.089582  407330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:34:32.103903  407330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:34:32.229247  407330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:34:32.354550  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:34:32.368208  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:34:32.383037  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
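The printf | tee step above leaves a one-line /etc/crictl.yaml pointing crictl at the CRI-O socket. A quick manual check of that wiring would look like this (illustrative commands, not captured from this run):
	cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version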
	I1210 06:34:32.544686  407330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:34:32.544766  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.554538  407330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 06:34:32.554607  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.563600  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.572445  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.581785  407330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:34:32.589992  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.599257  407330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.607809  407330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:34:32.616790  407330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:34:32.624404  407330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:34:32.631884  407330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:34:32.742959  407330 ssh_runner.go:195] Run: sudo systemctl restart crio
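Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands rather than read back from the node (TOML table headers omitted); the daemon-reload/restart pair is what makes CRI-O pick the drop-in up:
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]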
	I1210 06:34:32.924926  407330 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:34:32.925015  407330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:34:32.931953  407330 start.go:564] Will wait 60s for crictl version
	I1210 06:34:32.932037  407330 ssh_runner.go:195] Run: which crictl
	I1210 06:34:32.936975  407330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:34:32.972701  407330 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:34:32.972786  407330 ssh_runner.go:195] Run: crio --version
	I1210 06:34:33.008288  407330 ssh_runner.go:195] Run: crio --version
	I1210 06:34:33.045101  407330 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 06:34:33.048270  407330 cli_runner.go:164] Run: docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:34:33.065511  407330 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 06:34:33.072736  407330 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 06:34:33.075695  407330 kubeadm.go:884] updating cluster {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:34:33.075981  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:33.225944  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:33.376252  407330 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:34:33.530247  407330 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 06:34:33.530325  407330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:34:33.568941  407330 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:34:33.568954  407330 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:34:33.568960  407330 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 crio true true} ...
	I1210 06:34:33.569060  407330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-253997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
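The unit fragment above is written a few steps later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once installed, the merged unit (base service plus every drop-in) can be inspected with (illustrative):
	# show kubelet.service together with 10-kubeadm.conf and any other drop-ins
	sudo systemctl cat kubelet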
	I1210 06:34:33.569145  407330 ssh_runner.go:195] Run: crio config
	I1210 06:34:33.643186  407330 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 06:34:33.643211  407330 cni.go:84] Creating CNI manager for ""
	I1210 06:34:33.643224  407330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:34:33.643242  407330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:34:33.643280  407330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-253997 NodeName:functional-253997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:34:33.643429  407330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-253997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
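kubeadm can sanity-check a multi-document config file like the one above before any init phase runs; a hypothetical pre-flight using the binaries path and staging file from this run:
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new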
	
	I1210 06:34:33.643524  407330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:34:33.653419  407330 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:34:33.653495  407330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:34:33.663141  407330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:34:33.678587  407330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:34:33.693949  407330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1210 06:34:33.710464  407330 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:34:33.714723  407330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:34:33.827439  407330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:34:34.376520  407330 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997 for IP: 192.168.49.2
	I1210 06:34:34.376531  407330 certs.go:195] generating shared ca certs ...
	I1210 06:34:34.376561  407330 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:34:34.376695  407330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 06:34:34.376739  407330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 06:34:34.376746  407330 certs.go:257] generating profile certs ...
	I1210 06:34:34.376830  407330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.key
	I1210 06:34:34.376883  407330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key.d56e9423
	I1210 06:34:34.376918  407330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key
	I1210 06:34:34.377046  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 06:34:34.377076  407330 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 06:34:34.377083  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:34:34.377112  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:34:34.377138  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:34:34.377165  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 06:34:34.377235  407330 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 06:34:34.377907  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:34:34.400957  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:34:34.422626  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:34:34.444886  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:34:34.463194  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:34:34.485380  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:34:34.504994  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:34:34.523903  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:34:34.542693  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 06:34:34.560781  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 06:34:34.580039  407330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:34:34.598952  407330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:34:34.612103  407330 ssh_runner.go:195] Run: openssl version
	I1210 06:34:34.618607  407330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.626715  407330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 06:34:34.634462  407330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.638500  407330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.638572  407330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 06:34:34.680023  407330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:34:34.687891  407330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.695733  407330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:34:34.704338  407330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.708573  407330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.708632  407330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:34:34.750214  407330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:34:34.758402  407330 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.766563  407330 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 06:34:34.774837  407330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.779114  407330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.779177  407330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 06:34:34.821136  407330 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
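The 3ec20f2e.0, b5213941.0, and 51391683.0 names checked above follow OpenSSL's subject-hash convention: each symlink in /etc/ssl/certs is named after the hash of the CA's subject so verifiers can locate it. Reproduced by hand for the minikubeCA cert (illustrative):
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"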
	I1210 06:34:34.829270  407330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:34:34.833529  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:34:34.876277  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:34:34.917707  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:34:34.959457  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:34:35.001865  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:34:35.044914  407330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
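Each -checkend 86400 call above exits 0 only if the certificate stays valid for at least the next 86400 seconds (24 h), which is what these expiry checks gate on before skipping regeneration. As a standalone check (illustrative):
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"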
	I1210 06:34:35.086921  407330 kubeadm.go:401] StartCluster: {Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:34:35.087016  407330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:34:35.087089  407330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:34:35.117459  407330 cri.go:89] found id: ""
	I1210 06:34:35.117522  407330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:34:35.127607  407330 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:34:35.127629  407330 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:34:35.127685  407330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:34:35.136902  407330 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.137526  407330 kubeconfig.go:125] found "functional-253997" server: "https://192.168.49.2:8441"
	I1210 06:34:35.138779  407330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:34:35.148051  407330 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 06:19:55.285285887 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:34:33.703709051 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1210 06:34:35.148070  407330 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:34:35.148082  407330 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 06:34:35.148140  407330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:34:35.178671  407330 cri.go:89] found id: ""
	I1210 06:34:35.178737  407330 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:34:35.196838  407330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:34:35.205412  407330 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 10 06:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 10 06:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 06:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 10 06:24 /etc/kubernetes/scheduler.conf
	
	I1210 06:34:35.205484  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:34:35.213947  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:34:35.222529  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.222599  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:34:35.230587  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:34:35.239174  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.239260  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:34:35.247436  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:34:35.255726  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:34:35.255785  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:34:35.264394  407330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:34:35.273245  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:35.319550  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.241705  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.453815  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.521107  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:34:36.566051  407330 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:34:36.566126  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:34:37.067292  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... same pgrep poll repeated at ~500ms intervals (at :066 and :566 past each second) from 06:34:37 through 06:35:35, with no kube-apiserver process found ...]
	I1210 06:35:35.566890  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:36.066318  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
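The twice-per-second cadence visible in the timestamps above is minikube's apiserver wait loop; a plain-shell sketch of the same behavior (hypothetical, not minikube's actual implementation):
	# poll every ~500ms until a kube-apiserver process matching the regex appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done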
	I1210 06:35:36.566330  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:36.566414  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:36.592227  407330 cri.go:89] found id: ""
	I1210 06:35:36.592241  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.592248  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:36.592253  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:36.592312  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:36.622028  407330 cri.go:89] found id: ""
	I1210 06:35:36.622043  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.622051  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:36.622056  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:36.622116  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:36.648208  407330 cri.go:89] found id: ""
	I1210 06:35:36.648226  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.648234  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:36.648240  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:36.648298  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:36.674377  407330 cri.go:89] found id: ""
	I1210 06:35:36.674397  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.674405  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:36.674410  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:36.674471  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:36.699772  407330 cri.go:89] found id: ""
	I1210 06:35:36.699787  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.699794  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:36.699801  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:36.699864  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:36.724815  407330 cri.go:89] found id: ""
	I1210 06:35:36.724830  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.724838  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:36.724843  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:36.724900  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:36.750775  407330 cri.go:89] found id: ""
	I1210 06:35:36.750791  407330 logs.go:282] 0 containers: []
	W1210 06:35:36.750798  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:36.750806  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:36.750820  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:36.820446  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:36.820465  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:36.835955  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:36.835970  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:36.903411  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:36.895055   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.895887   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.897562   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.898101   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.899639   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:36.895055   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.895887   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.897562   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.898101   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:36.899639   11639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
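The connection-refused errors above mean nothing is listening on port 8441 yet, consistent with the empty kube-apiserver container listings. Two quick host-side checks one could run at this point (illustrative):
	sudo ss -ltn | grep -w 8441 || echo "nothing listening on :8441"
	curl -ksS https://localhost:8441/livez; echo   # succeeds only once the apiserver is serving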
	I1210 06:35:36.903424  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:36.903435  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:36.979747  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:36.979768  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:39.514581  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:39.524909  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:39.524970  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:39.550102  407330 cri.go:89] found id: ""
	I1210 06:35:39.550116  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.550124  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:39.550129  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:39.550187  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:39.576588  407330 cri.go:89] found id: ""
	I1210 06:35:39.576602  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.576619  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:39.576624  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:39.576690  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:39.603288  407330 cri.go:89] found id: ""
	I1210 06:35:39.603303  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.603310  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:39.603315  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:39.603373  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:39.632338  407330 cri.go:89] found id: ""
	I1210 06:35:39.632353  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.632360  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:39.632365  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:39.632420  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:39.657752  407330 cri.go:89] found id: ""
	I1210 06:35:39.657767  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.657773  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:39.657779  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:39.657844  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:39.683212  407330 cri.go:89] found id: ""
	I1210 06:35:39.683226  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.683234  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:39.683240  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:39.683300  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:39.708413  407330 cri.go:89] found id: ""
	I1210 06:35:39.708437  407330 logs.go:282] 0 containers: []
	W1210 06:35:39.708445  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:39.708453  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:39.708464  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:39.775637  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:39.775659  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:39.791086  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:39.791102  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:39.857652  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:39.849379   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.849925   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.851713   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.852318   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.853965   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:35:39.849379   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.849925   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.851713   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.852318   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:39.853965   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:35:39.857663  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:39.857675  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:39.935547  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:39.935569  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:42.469375  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:42.480182  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:42.480240  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:42.506760  407330 cri.go:89] found id: ""
	I1210 06:35:42.506774  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.506781  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:42.506786  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:42.506843  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:42.536234  407330 cri.go:89] found id: ""
	I1210 06:35:42.536249  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.536256  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:42.536261  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:42.536329  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:42.566988  407330 cri.go:89] found id: ""
	I1210 06:35:42.567003  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.567010  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:42.567015  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:42.567076  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:42.592607  407330 cri.go:89] found id: ""
	I1210 06:35:42.592630  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.592638  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:42.592643  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:42.592709  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:42.617649  407330 cri.go:89] found id: ""
	I1210 06:35:42.617664  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.617671  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:42.617676  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:42.617734  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:42.643410  407330 cri.go:89] found id: ""
	I1210 06:35:42.643425  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.643432  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:42.643437  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:42.643503  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:42.669531  407330 cri.go:89] found id: ""
	I1210 06:35:42.669546  407330 logs.go:282] 0 containers: []
	W1210 06:35:42.669553  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:42.669561  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:42.669571  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:42.735924  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:42.735944  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:42.751205  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:42.751229  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:42.816158  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:42.807932   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.808831   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810433   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.810980   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:42.812516   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:35:42.816169  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:42.816179  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:42.893021  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:42.893042  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:45.426224  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:45.438079  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:45.438148  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:45.472267  407330 cri.go:89] found id: ""
	I1210 06:35:45.472291  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.472299  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:45.472306  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:45.472384  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:45.502901  407330 cri.go:89] found id: ""
	I1210 06:35:45.502931  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.502939  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:45.502945  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:45.503008  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:45.529442  407330 cri.go:89] found id: ""
	I1210 06:35:45.529458  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.529465  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:45.529470  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:45.529534  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:45.555125  407330 cri.go:89] found id: ""
	I1210 06:35:45.555139  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.555159  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:45.555165  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:45.555243  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:45.580961  407330 cri.go:89] found id: ""
	I1210 06:35:45.580976  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.580994  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:45.580999  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:45.581057  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:45.610965  407330 cri.go:89] found id: ""
	I1210 06:35:45.610980  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.610987  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:45.610993  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:45.611059  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:45.637091  407330 cri.go:89] found id: ""
	I1210 06:35:45.637105  407330 logs.go:282] 0 containers: []
	W1210 06:35:45.637120  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:45.637128  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:45.637137  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:45.715413  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:45.715435  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:45.749154  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:45.749171  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:45.815517  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:45.815543  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:45.831429  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:45.831446  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:45.906374  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:45.898629   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.899395   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.900929   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.901506   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:45.902971   11964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:35:48.406578  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:48.421255  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:48.421324  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:48.447131  407330 cri.go:89] found id: ""
	I1210 06:35:48.447146  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.447153  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:48.447159  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:48.447220  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:48.473099  407330 cri.go:89] found id: ""
	I1210 06:35:48.473122  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.473129  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:48.473134  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:48.473222  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:48.498597  407330 cri.go:89] found id: ""
	I1210 06:35:48.498612  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.498619  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:48.498624  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:48.498681  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:48.523362  407330 cri.go:89] found id: ""
	I1210 06:35:48.523377  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.523384  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:48.523389  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:48.523453  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:48.551807  407330 cri.go:89] found id: ""
	I1210 06:35:48.551821  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.551835  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:48.551840  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:48.551900  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:48.581473  407330 cri.go:89] found id: ""
	I1210 06:35:48.581487  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.581502  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:48.581509  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:48.581565  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:48.607499  407330 cri.go:89] found id: ""
	I1210 06:35:48.607514  407330 logs.go:282] 0 containers: []
	W1210 06:35:48.607521  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:48.607529  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:48.607539  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:48.673753  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:48.673774  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:48.688837  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:48.688853  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:48.751707  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:48.744029   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.744581   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746111   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.746590   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:48.748085   12056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:35:48.751717  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:48.751727  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:48.828663  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:48.828686  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:51.363003  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:51.376217  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:51.376312  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:51.407718  407330 cri.go:89] found id: ""
	I1210 06:35:51.407732  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.407755  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:51.407762  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:51.407874  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:51.444235  407330 cri.go:89] found id: ""
	I1210 06:35:51.444269  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.444286  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:51.444295  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:51.444379  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:51.474869  407330 cri.go:89] found id: ""
	I1210 06:35:51.474883  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.474890  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:51.474895  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:51.474953  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:51.504739  407330 cri.go:89] found id: ""
	I1210 06:35:51.504764  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.504772  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:51.504777  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:51.504846  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:51.532353  407330 cri.go:89] found id: ""
	I1210 06:35:51.532368  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.532375  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:51.532380  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:51.532455  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:51.557565  407330 cri.go:89] found id: ""
	I1210 06:35:51.557579  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.557586  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:51.557591  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:51.557661  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:51.583285  407330 cri.go:89] found id: ""
	I1210 06:35:51.583300  407330 logs.go:282] 0 containers: []
	W1210 06:35:51.583307  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:51.583315  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:51.583325  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:51.613387  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:51.613404  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:51.680028  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:51.680049  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:51.695935  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:51.695952  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:51.759280  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:51.750909   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.751756   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753321   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.753933   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:51.755583   12176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:35:51.759290  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:51.759301  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:54.338519  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:54.348725  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:54.348780  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:54.383598  407330 cri.go:89] found id: ""
	I1210 06:35:54.383626  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.383634  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:54.383639  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:54.383707  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:54.410152  407330 cri.go:89] found id: ""
	I1210 06:35:54.410180  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.410187  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:54.410192  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:54.410264  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:54.438326  407330 cri.go:89] found id: ""
	I1210 06:35:54.438352  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.438360  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:54.438365  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:54.438441  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:54.465850  407330 cri.go:89] found id: ""
	I1210 06:35:54.465864  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.465871  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:54.465876  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:54.465931  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:54.491709  407330 cri.go:89] found id: ""
	I1210 06:35:54.491722  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.491729  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:54.491734  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:54.491790  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:54.523425  407330 cri.go:89] found id: ""
	I1210 06:35:54.523440  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.523447  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:54.523452  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:54.523548  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:54.550380  407330 cri.go:89] found id: ""
	I1210 06:35:54.550394  407330 logs.go:282] 0 containers: []
	W1210 06:35:54.550411  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:54.550438  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:54.550449  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:35:54.582306  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:54.582324  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:54.647908  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:54.647927  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:54.663750  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:54.663772  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:54.730309  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:54.719958   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.720734   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.722315   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.724777   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:54.725551   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:35:54.730320  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:54.730331  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:57.308665  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:35:57.320319  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:35:57.320392  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:35:57.345562  407330 cri.go:89] found id: ""
	I1210 06:35:57.345577  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.345584  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:35:57.345589  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:35:57.345647  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:35:57.371859  407330 cri.go:89] found id: ""
	I1210 06:35:57.371874  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.371897  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:35:57.371903  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:35:57.371970  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:35:57.406362  407330 cri.go:89] found id: ""
	I1210 06:35:57.406377  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.406384  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:35:57.406389  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:35:57.406463  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:35:57.436087  407330 cri.go:89] found id: ""
	I1210 06:35:57.436103  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.436110  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:35:57.436116  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:35:57.436187  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:35:57.465764  407330 cri.go:89] found id: ""
	I1210 06:35:57.465779  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.465786  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:35:57.465791  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:35:57.465867  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:35:57.494039  407330 cri.go:89] found id: ""
	I1210 06:35:57.494065  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.494073  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:35:57.494078  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:35:57.494145  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:35:57.520097  407330 cri.go:89] found id: ""
	I1210 06:35:57.520123  407330 logs.go:282] 0 containers: []
	W1210 06:35:57.520131  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:35:57.520140  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:35:57.520151  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:35:57.586496  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:35:57.586517  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:35:57.602111  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:35:57.602128  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:35:57.668344  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:35:57.660356   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.660757   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.662627   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.663077   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:35:57.664745   12377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:35:57.668356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:35:57.668367  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:35:57.746160  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:35:57.746183  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:00.275712  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:00.321874  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:00.321955  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:00.384327  407330 cri.go:89] found id: ""
	I1210 06:36:00.384343  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.384351  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:00.384357  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:00.384451  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:00.459817  407330 cri.go:89] found id: ""
	I1210 06:36:00.459834  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.459842  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:00.459848  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:00.459916  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:00.497674  407330 cri.go:89] found id: ""
	I1210 06:36:00.497690  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.497698  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:00.497704  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:00.497774  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:00.541499  407330 cri.go:89] found id: ""
	I1210 06:36:00.541516  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.541525  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:00.541531  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:00.541613  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:00.581412  407330 cri.go:89] found id: ""
	I1210 06:36:00.581436  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.581463  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:00.581468  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:00.581541  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:00.610779  407330 cri.go:89] found id: ""
	I1210 06:36:00.610795  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.610802  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:00.610807  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:00.610870  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:00.642543  407330 cri.go:89] found id: ""
	I1210 06:36:00.642559  407330 logs.go:282] 0 containers: []
	W1210 06:36:00.642567  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:00.642575  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:00.642586  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:00.710346  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:00.710367  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:00.725875  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:00.725894  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:00.793058  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:00.784782   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.785552   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787181   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.787496   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:00.789654   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:00.793071  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:00.793084  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:00.875916  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:00.875944  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:03.406417  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:03.419044  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:03.419120  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:03.447628  407330 cri.go:89] found id: ""
	I1210 06:36:03.447658  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.447666  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:03.447671  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:03.447737  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:03.474253  407330 cri.go:89] found id: ""
	I1210 06:36:03.474266  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.474274  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:03.474279  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:03.474336  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:03.500678  407330 cri.go:89] found id: ""
	I1210 06:36:03.500694  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.500701  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:03.500707  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:03.500768  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:03.528282  407330 cri.go:89] found id: ""
	I1210 06:36:03.528298  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.528306  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:03.528311  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:03.528373  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:03.556656  407330 cri.go:89] found id: ""
	I1210 06:36:03.556670  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.556678  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:03.556683  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:03.556743  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:03.583735  407330 cri.go:89] found id: ""
	I1210 06:36:03.583750  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.583758  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:03.583763  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:03.583819  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:03.609076  407330 cri.go:89] found id: ""
	I1210 06:36:03.609090  407330 logs.go:282] 0 containers: []
	W1210 06:36:03.609097  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:03.609105  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:03.609115  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:03.686817  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:03.686837  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:03.716372  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:03.716389  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:03.784121  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:03.784140  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:03.799951  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:03.799970  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:03.868350  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:03.860456   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.860967   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.862601   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.863024   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:03.864532   12595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:06.369008  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:06.379783  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:06.379844  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:06.413424  407330 cri.go:89] found id: ""
	I1210 06:36:06.413438  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.413452  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:06.413457  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:06.413518  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:06.455432  407330 cri.go:89] found id: ""
	I1210 06:36:06.455446  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.455453  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:06.455458  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:06.455518  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:06.484987  407330 cri.go:89] found id: ""
	I1210 06:36:06.485002  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.485011  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:06.485016  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:06.485079  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:06.510864  407330 cri.go:89] found id: ""
	I1210 06:36:06.510879  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.510887  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:06.510892  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:06.510955  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:06.536841  407330 cri.go:89] found id: ""
	I1210 06:36:06.536856  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.536863  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:06.536868  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:06.536928  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:06.563896  407330 cri.go:89] found id: ""
	I1210 06:36:06.563911  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.563918  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:06.563923  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:06.563982  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:06.588959  407330 cri.go:89] found id: ""
	I1210 06:36:06.588973  407330 logs.go:282] 0 containers: []
	W1210 06:36:06.588981  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:06.588988  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:06.588998  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:06.665721  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:06.665743  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:06.694509  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:06.694527  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:06.761392  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:06.761412  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:06.776431  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:06.776448  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:06.839723  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:06.830973   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.831471   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.833376   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.834107   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.835971   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:06.830973   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.831471   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.833376   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.834107   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:06.835971   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
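
Each retry iteration above follows the same shape: minikube first looks for a kube-apiserver process, then asks the CRI runtime for every control-plane container by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and only then falls back to collecting logs. Below is a minimal standalone Go sketch of that container probe, mirroring the repeated `sudo crictl ps -a --quiet --name=<component>` calls in the log; the helper runs crictl directly rather than over SSH, so it is illustrative of the pattern only, not minikube's actual ssh_runner API:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // probe lists all CRI containers whose name matches component, the same
    // query the log issues once per control-plane component each cycle.
    // Illustrative sketch; assumes crictl is on PATH and sudo is available.
    func probe(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, c := range components {
    		ids, err := probe(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %v\n", c, ids)
    	}
    }

An empty ID list is exactly the `found id: ""` / `0 containers` state the log reports for every component, which is why the gather step that follows can only pull kubelet, dmesg, and CRI-O journals.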
	I1210 06:36:09.340200  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:09.350423  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:09.350492  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:09.377180  407330 cri.go:89] found id: ""
	I1210 06:36:09.377216  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.377224  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:09.377229  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:09.377296  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:09.408780  407330 cri.go:89] found id: ""
	I1210 06:36:09.408794  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.408810  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:09.408817  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:09.408891  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:09.439014  407330 cri.go:89] found id: ""
	I1210 06:36:09.439028  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.439046  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:09.439051  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:09.439123  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:09.465550  407330 cri.go:89] found id: ""
	I1210 06:36:09.465570  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.465577  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:09.465582  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:09.465640  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:09.495077  407330 cri.go:89] found id: ""
	I1210 06:36:09.495092  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.495099  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:09.495104  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:09.495160  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:09.524259  407330 cri.go:89] found id: ""
	I1210 06:36:09.524283  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.524291  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:09.524296  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:09.524365  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:09.552397  407330 cri.go:89] found id: ""
	I1210 06:36:09.552411  407330 logs.go:282] 0 containers: []
	W1210 06:36:09.552428  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:09.552435  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:09.552445  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:09.617989  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:09.618009  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:09.633375  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:09.633391  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:09.703345  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:09.695844   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.696518   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.697933   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.698390   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.699860   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:09.695844   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.696518   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.697933   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.698390   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:09.699860   12789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:09.703356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:09.703368  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:09.780941  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:09.780963  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:12.311981  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:12.322588  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:12.322649  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:12.348408  407330 cri.go:89] found id: ""
	I1210 06:36:12.348423  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.348430  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:12.348436  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:12.348494  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:12.381450  407330 cri.go:89] found id: ""
	I1210 06:36:12.381465  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.381492  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:12.381497  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:12.381565  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:12.421286  407330 cri.go:89] found id: ""
	I1210 06:36:12.421301  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.421309  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:12.421314  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:12.421381  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:12.453573  407330 cri.go:89] found id: ""
	I1210 06:36:12.453598  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.453605  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:12.453611  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:12.453677  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:12.480195  407330 cri.go:89] found id: ""
	I1210 06:36:12.480210  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.480218  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:12.480225  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:12.480290  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:12.505648  407330 cri.go:89] found id: ""
	I1210 06:36:12.505662  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.505669  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:12.505674  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:12.505732  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:12.532083  407330 cri.go:89] found id: ""
	I1210 06:36:12.532097  407330 logs.go:282] 0 containers: []
	W1210 06:36:12.532104  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:12.532112  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:12.532125  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:12.598623  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:12.598646  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:12.614317  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:12.614336  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:12.686805  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:12.678148   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.678809   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.680931   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.681479   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.683127   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:12.678148   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.678809   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.680931   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.681479   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:12.683127   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:12.686817  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:12.686828  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:12.768698  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:12.768719  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
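
The recurring `describe nodes` failure is the expected symptom of that empty container list: kubectl dials the apiserver endpoint at localhost:8441 and gets ECONNREFUSED because nothing is listening there. A small standard-library Go check that classifies the same condition (the address and port are taken from the log; the program itself is an illustrative sketch):

    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"syscall"
    	"time"
    )

    func main() {
    	// Dial the same endpoint kubectl is failing against in the log above.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err == nil {
    		conn.Close()
    		fmt.Println("something is listening on 8441")
    		return
    	}
    	if errors.Is(err, syscall.ECONNREFUSED) {
    		fmt.Println("connection refused: no apiserver bound to 8441")
    		return
    	}
    	fmt.Println("dial failed:", err)
    }

Because the refusal happens at the TCP layer, every discovery attempt inside a single kubectl invocation fails identically, which is why each failure block repeats the same memcache.go error five times before giving up.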
	I1210 06:36:15.302091  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:15.312582  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:15.312644  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:15.338874  407330 cri.go:89] found id: ""
	I1210 06:36:15.338889  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.338897  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:15.338902  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:15.338962  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:15.365600  407330 cri.go:89] found id: ""
	I1210 06:36:15.365614  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.365621  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:15.365627  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:15.365687  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:15.405324  407330 cri.go:89] found id: ""
	I1210 06:36:15.405339  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.405346  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:15.405352  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:15.405411  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:15.438276  407330 cri.go:89] found id: ""
	I1210 06:36:15.438290  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.438298  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:15.438304  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:15.438362  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:15.465120  407330 cri.go:89] found id: ""
	I1210 06:36:15.465135  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.465142  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:15.465147  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:15.465226  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:15.490880  407330 cri.go:89] found id: ""
	I1210 06:36:15.490894  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.490901  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:15.490906  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:15.490968  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:15.517171  407330 cri.go:89] found id: ""
	I1210 06:36:15.517208  407330 logs.go:282] 0 containers: []
	W1210 06:36:15.517215  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:15.517224  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:15.517235  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:15.580940  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:15.573253   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.573879   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575387   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575909   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.577385   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:15.573253   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.573879   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575387   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.575909   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:15.577385   12989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:15.580950  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:15.580962  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:15.657832  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:15.657853  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:15.690721  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:15.690738  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:15.755970  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:15.755993  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:18.272507  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:18.282762  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:18.282822  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:18.312952  407330 cri.go:89] found id: ""
	I1210 06:36:18.312966  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.312980  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:18.312986  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:18.313048  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:18.340174  407330 cri.go:89] found id: ""
	I1210 06:36:18.340189  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.340196  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:18.340201  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:18.340260  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:18.365096  407330 cri.go:89] found id: ""
	I1210 06:36:18.365111  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.365118  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:18.365122  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:18.365178  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:18.408189  407330 cri.go:89] found id: ""
	I1210 06:36:18.408203  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.408210  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:18.408215  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:18.408271  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:18.439330  407330 cri.go:89] found id: ""
	I1210 06:36:18.439344  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.439351  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:18.439357  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:18.439413  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:18.471472  407330 cri.go:89] found id: ""
	I1210 06:36:18.471486  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.471493  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:18.471498  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:18.471561  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:18.499541  407330 cri.go:89] found id: ""
	I1210 06:36:18.499555  407330 logs.go:282] 0 containers: []
	W1210 06:36:18.499562  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:18.499569  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:18.499579  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:18.566266  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:18.566288  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:18.581335  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:18.581351  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:18.649633  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:18.642053   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.642777   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644268   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644684   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.646194   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:18.642053   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.642777   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644268   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.644684   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:18.646194   13097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:18.649644  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:18.649657  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:18.727427  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:18.727447  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:21.256173  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:21.266342  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:21.266401  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:21.291198  407330 cri.go:89] found id: ""
	I1210 06:36:21.291212  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.291219  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:21.291224  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:21.291285  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:21.317809  407330 cri.go:89] found id: ""
	I1210 06:36:21.317823  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.317831  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:21.317836  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:21.317893  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:21.349023  407330 cri.go:89] found id: ""
	I1210 06:36:21.349038  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.349046  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:21.349051  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:21.349112  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:21.377021  407330 cri.go:89] found id: ""
	I1210 06:36:21.377036  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.377043  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:21.377049  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:21.377128  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:21.414828  407330 cri.go:89] found id: ""
	I1210 06:36:21.414843  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.414853  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:21.414858  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:21.414924  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:21.448750  407330 cri.go:89] found id: ""
	I1210 06:36:21.448765  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.448772  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:21.448778  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:21.448836  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:21.475060  407330 cri.go:89] found id: ""
	I1210 06:36:21.475082  407330 logs.go:282] 0 containers: []
	W1210 06:36:21.475089  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:21.475097  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:21.475109  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:21.544320  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:21.544350  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:21.559538  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:21.559554  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:21.623730  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:21.615598   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.616210   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.617863   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.618444   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.620054   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:21.615598   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.616210   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.617863   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.618444   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:21.620054   13203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:21.623741  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:21.623754  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:21.703706  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:21.703726  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:24.232360  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:24.242917  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:24.242977  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:24.272666  407330 cri.go:89] found id: ""
	I1210 06:36:24.272681  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.272688  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:24.272693  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:24.272762  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:24.298359  407330 cri.go:89] found id: ""
	I1210 06:36:24.298374  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.298381  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:24.298386  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:24.298448  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:24.324096  407330 cri.go:89] found id: ""
	I1210 06:36:24.324110  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.324117  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:24.324122  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:24.324180  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:24.352195  407330 cri.go:89] found id: ""
	I1210 06:36:24.352210  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.352217  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:24.352223  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:24.352281  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:24.392094  407330 cri.go:89] found id: ""
	I1210 06:36:24.392109  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.392116  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:24.392121  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:24.392180  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:24.433688  407330 cri.go:89] found id: ""
	I1210 06:36:24.433702  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.433716  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:24.433721  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:24.433780  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:24.461088  407330 cri.go:89] found id: ""
	I1210 06:36:24.461103  407330 logs.go:282] 0 containers: []
	W1210 06:36:24.461110  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:24.461118  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:24.461140  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:24.491187  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:24.491203  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:24.557420  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:24.557442  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:24.572719  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:24.572736  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:24.638182  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:24.629865   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.630603   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632247   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632788   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.634516   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:24.629865   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.630603   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632247   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.632788   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:24.634516   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:24.638192  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:24.638204  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:27.215263  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:27.225429  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:27.225490  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:27.250600  407330 cri.go:89] found id: ""
	I1210 06:36:27.250623  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.250630  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:27.250636  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:27.250696  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:27.275244  407330 cri.go:89] found id: ""
	I1210 06:36:27.275258  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.275266  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:27.275271  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:27.275337  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:27.303675  407330 cri.go:89] found id: ""
	I1210 06:36:27.303699  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.303707  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:27.303713  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:27.303779  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:27.329179  407330 cri.go:89] found id: ""
	I1210 06:36:27.329211  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.329219  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:27.329225  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:27.329294  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:27.354254  407330 cri.go:89] found id: ""
	I1210 06:36:27.354269  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.354276  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:27.354282  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:27.354340  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:27.386524  407330 cri.go:89] found id: ""
	I1210 06:36:27.386539  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.386546  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:27.386552  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:27.386608  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:27.419941  407330 cri.go:89] found id: ""
	I1210 06:36:27.419964  407330 logs.go:282] 0 containers: []
	W1210 06:36:27.419972  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:27.419980  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:27.419990  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:27.489413  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:27.489436  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:27.504358  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:27.504375  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:27.572076  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:27.564125   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.564752   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.566500   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.567122   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.568559   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:27.564125   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.564752   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.566500   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.567122   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:27.568559   13407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:27.572087  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:27.572097  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:27.652684  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:27.652704  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:30.186931  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:30.198655  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:30.198720  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:30.226217  407330 cri.go:89] found id: ""
	I1210 06:36:30.226239  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.226247  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:30.226252  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:30.226319  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:30.254245  407330 cri.go:89] found id: ""
	I1210 06:36:30.254261  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.254268  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:30.254273  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:30.254331  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:30.282139  407330 cri.go:89] found id: ""
	I1210 06:36:30.282154  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.282162  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:30.282167  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:30.282227  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:30.308968  407330 cri.go:89] found id: ""
	I1210 06:36:30.308992  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.308999  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:30.309005  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:30.309076  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:30.337543  407330 cri.go:89] found id: ""
	I1210 06:36:30.337558  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.337565  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:30.337570  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:30.337630  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:30.366448  407330 cri.go:89] found id: ""
	I1210 06:36:30.366463  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.366477  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:30.366483  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:30.366542  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:30.404619  407330 cri.go:89] found id: ""
	I1210 06:36:30.404641  407330 logs.go:282] 0 containers: []
	W1210 06:36:30.404649  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:30.404656  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:30.404667  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:30.484453  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:30.484481  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:30.499101  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:30.499118  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:30.561567  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:30.553438   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.554141   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.555797   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.556329   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:30.557890   13516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:30.561578  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:30.561589  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:30.638801  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:30.638822  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
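The container-status command embeds a shell fallback: "which crictl || echo crictl" expands to the crictl path when it is installed, and to the bare word crictl otherwise, whose failure then triggers the "|| sudo docker ps -a" branch. The same logic written out as a sketch (not the exact command minikube runs):

    # try the CRI CLI first; if it is absent or exits nonzero, fall back to docker
    if ! sudo crictl ps -a 2>/dev/null; then
      sudo docker ps -a
    fi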
	I1210 06:36:33.169370  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
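This pgrep probe heads each retry round: minikube looks for a running kube-apiserver process for this profile and, finding none, re-lists containers and re-gathers logs before trying again; the timestamps show one round roughly every three seconds (06:36:30, :33, :36, ...). Sketched as a shell loop (the real loop lives in the Go code and also enforces an overall timeout):

    # poll for the apiserver process; each failed round triggers the log gathering above
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done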
	I1210 06:36:33.179597  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:33.179662  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:33.204216  407330 cri.go:89] found id: ""
	I1210 06:36:33.204230  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.204246  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:33.204252  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:33.204309  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:33.229498  407330 cri.go:89] found id: ""
	I1210 06:36:33.229512  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.229519  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:33.229524  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:33.229580  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:33.255490  407330 cri.go:89] found id: ""
	I1210 06:36:33.255505  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.255521  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:33.255527  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:33.255593  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:33.283936  407330 cri.go:89] found id: ""
	I1210 06:36:33.283960  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.283968  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:33.283974  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:33.284052  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:33.308959  407330 cri.go:89] found id: ""
	I1210 06:36:33.308974  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.308984  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:33.308990  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:33.309058  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:33.335830  407330 cri.go:89] found id: ""
	I1210 06:36:33.335853  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.335860  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:33.335866  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:33.335936  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:33.362154  407330 cri.go:89] found id: ""
	I1210 06:36:33.362179  407330 logs.go:282] 0 containers: []
	W1210 06:36:33.362187  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:33.362196  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:33.362208  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:33.410395  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:33.410413  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:33.480770  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:33.480789  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:33.496511  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:33.496527  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:33.563939  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:33.556146   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.556663   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.558166   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.558668   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:33.560192   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:33.563950  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:33.563961  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:36.141828  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:36.152734  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:36.152795  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:36.178688  407330 cri.go:89] found id: ""
	I1210 06:36:36.178703  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.178710  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:36.178716  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:36.178776  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:36.205685  407330 cri.go:89] found id: ""
	I1210 06:36:36.205700  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.205707  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:36.205712  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:36.205771  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:36.231383  407330 cri.go:89] found id: ""
	I1210 06:36:36.231398  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.231411  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:36.231418  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:36.231480  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:36.257291  407330 cri.go:89] found id: ""
	I1210 06:36:36.257316  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.257324  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:36.257329  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:36.257400  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:36.287683  407330 cri.go:89] found id: ""
	I1210 06:36:36.287697  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.287704  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:36.287709  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:36.287767  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:36.313785  407330 cri.go:89] found id: ""
	I1210 06:36:36.313799  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.313807  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:36.313812  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:36.313871  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:36.339325  407330 cri.go:89] found id: ""
	I1210 06:36:36.339339  407330 logs.go:282] 0 containers: []
	W1210 06:36:36.339347  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:36.339356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:36.339369  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:36.421249  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:36.421268  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:36.458225  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:36.458243  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:36.528365  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:36.528384  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:36.544683  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:36.544705  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:36.611624  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:36.602655   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.603473   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.605101   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.605888   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:36.607572   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:39.111891  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:39.122952  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:39.123016  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:39.151788  407330 cri.go:89] found id: ""
	I1210 06:36:39.151817  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.151825  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:39.151831  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:39.151902  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:39.176656  407330 cri.go:89] found id: ""
	I1210 06:36:39.176679  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.176686  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:39.176691  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:39.176759  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:39.203206  407330 cri.go:89] found id: ""
	I1210 06:36:39.203220  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.203227  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:39.203233  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:39.203289  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:39.228848  407330 cri.go:89] found id: ""
	I1210 06:36:39.228862  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.228869  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:39.228875  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:39.228933  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:39.258475  407330 cri.go:89] found id: ""
	I1210 06:36:39.258512  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.258519  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:39.258524  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:39.258589  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:39.283240  407330 cri.go:89] found id: ""
	I1210 06:36:39.283254  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.283261  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:39.283268  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:39.283328  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:39.312591  407330 cri.go:89] found id: ""
	I1210 06:36:39.312604  407330 logs.go:282] 0 containers: []
	W1210 06:36:39.312611  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:39.312619  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:39.312629  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:39.380680  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:39.380703  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:39.397793  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:39.397809  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:39.469117  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:39.460579   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.461325   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.463132   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.463721   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:39.465358   13834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:39.469128  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:39.469139  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:39.546111  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:39.546131  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:42.076431  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:42.089265  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:42.089335  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:42.121496  407330 cri.go:89] found id: ""
	I1210 06:36:42.121512  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.121520  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:42.121526  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:42.121593  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:42.151688  407330 cri.go:89] found id: ""
	I1210 06:36:42.151704  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.151712  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:42.151717  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:42.151784  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:42.190925  407330 cri.go:89] found id: ""
	I1210 06:36:42.190942  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.190949  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:42.190955  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:42.191063  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:42.225827  407330 cri.go:89] found id: ""
	I1210 06:36:42.225849  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.225857  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:42.225863  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:42.225931  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:42.254453  407330 cri.go:89] found id: ""
	I1210 06:36:42.254467  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.254475  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:42.254480  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:42.254557  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:42.281514  407330 cri.go:89] found id: ""
	I1210 06:36:42.281536  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.281545  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:42.281550  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:42.281615  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:42.309082  407330 cri.go:89] found id: ""
	I1210 06:36:42.309097  407330 logs.go:282] 0 containers: []
	W1210 06:36:42.309105  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:42.309115  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:42.309127  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:42.325376  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:42.325393  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:42.394971  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:42.386397   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.387396   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.389262   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.389603   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:42.390932   13931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:42.394982  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:42.394993  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:42.480444  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:42.480463  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:42.513077  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:42.513094  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:45.082079  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:45.095928  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:45.096005  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:45.136147  407330 cri.go:89] found id: ""
	I1210 06:36:45.136165  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.136172  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:45.136178  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:45.136321  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:45.171561  407330 cri.go:89] found id: ""
	I1210 06:36:45.171577  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.171584  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:45.171590  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:45.171667  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:45.214225  407330 cri.go:89] found id: ""
	I1210 06:36:45.214243  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.214277  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:45.214282  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:45.214364  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:45.274027  407330 cri.go:89] found id: ""
	I1210 06:36:45.274044  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.274052  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:45.274058  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:45.274128  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:45.321536  407330 cri.go:89] found id: ""
	I1210 06:36:45.321553  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.321561  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:45.321567  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:45.321719  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:45.355270  407330 cri.go:89] found id: ""
	I1210 06:36:45.355285  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.355303  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:45.355310  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:45.355386  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:45.388777  407330 cri.go:89] found id: ""
	I1210 06:36:45.388801  407330 logs.go:282] 0 containers: []
	W1210 06:36:45.388809  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:45.388817  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:45.388827  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:45.478699  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:45.478723  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:45.507903  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:45.507921  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:45.575844  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:45.575864  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:45.591861  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:45.591885  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:45.656312  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:45.648123   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.648663   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.650406   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.650984   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:45.652724   14058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:48.156556  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:48.166976  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:48.167036  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:48.192782  407330 cri.go:89] found id: ""
	I1210 06:36:48.192807  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.192817  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:48.192824  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:48.192889  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:48.218586  407330 cri.go:89] found id: ""
	I1210 06:36:48.218600  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.218607  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:48.218623  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:48.218682  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:48.244757  407330 cri.go:89] found id: ""
	I1210 06:36:48.244771  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.244778  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:48.244783  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:48.244841  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:48.271671  407330 cri.go:89] found id: ""
	I1210 06:36:48.271685  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.271692  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:48.271697  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:48.271756  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:48.298466  407330 cri.go:89] found id: ""
	I1210 06:36:48.298480  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.298487  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:48.298493  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:48.298603  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:48.324794  407330 cri.go:89] found id: ""
	I1210 06:36:48.324808  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.324825  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:48.324830  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:48.324888  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:48.351036  407330 cri.go:89] found id: ""
	I1210 06:36:48.351051  407330 logs.go:282] 0 containers: []
	W1210 06:36:48.351058  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:48.351065  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:48.351076  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:48.384287  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:48.384303  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:48.462134  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:48.462154  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:48.477421  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:48.477439  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:48.544257  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:48.535925   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.536728   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.538380   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.538978   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:48.540777   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:48.544268  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:48.544279  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:51.122102  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:51.133691  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:51.133753  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:51.161091  407330 cri.go:89] found id: ""
	I1210 06:36:51.161106  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.161113  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:51.161119  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:51.161217  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:51.189850  407330 cri.go:89] found id: ""
	I1210 06:36:51.189865  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.189872  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:51.189877  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:51.189944  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:51.215676  407330 cri.go:89] found id: ""
	I1210 06:36:51.215691  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.215698  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:51.215703  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:51.215763  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:51.241638  407330 cri.go:89] found id: ""
	I1210 06:36:51.241653  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.241660  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:51.241666  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:51.241728  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:51.266737  407330 cri.go:89] found id: ""
	I1210 06:36:51.266752  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.266759  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:51.266764  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:51.266823  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:51.291896  407330 cri.go:89] found id: ""
	I1210 06:36:51.291911  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.291918  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:51.291923  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:51.291982  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:51.317807  407330 cri.go:89] found id: ""
	I1210 06:36:51.317823  407330 logs.go:282] 0 containers: []
	W1210 06:36:51.317830  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:51.317838  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:51.317849  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:51.385260  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:51.385280  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:51.400443  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:51.400459  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:51.479768  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:51.472416   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.472835   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474317   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.474686   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:51.476122   14260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:51.479778  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:51.479789  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:51.556275  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:51.556295  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:54.087759  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:54.098770  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:54.098837  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:54.124003  407330 cri.go:89] found id: ""
	I1210 06:36:54.124017  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.124025  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:54.124030  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:54.124091  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:54.150185  407330 cri.go:89] found id: ""
	I1210 06:36:54.150200  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.150207  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:54.150213  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:54.150272  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:54.177121  407330 cri.go:89] found id: ""
	I1210 06:36:54.177135  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.177143  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:54.177148  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:54.177248  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:54.202926  407330 cri.go:89] found id: ""
	I1210 06:36:54.202941  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.202948  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:54.202953  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:54.203013  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:54.232186  407330 cri.go:89] found id: ""
	I1210 06:36:54.232200  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.232215  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:54.232221  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:54.232291  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:54.257570  407330 cri.go:89] found id: ""
	I1210 06:36:54.257584  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.257592  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:54.257597  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:54.257656  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:54.282060  407330 cri.go:89] found id: ""
	I1210 06:36:54.282074  407330 logs.go:282] 0 containers: []
	W1210 06:36:54.282081  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:54.282088  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:54.282099  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:54.347704  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:54.347728  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:54.362634  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:54.362652  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:54.450702  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:54.442872   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.443270   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.444790   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.445114   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:54.446658   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:36:54.450713  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:54.450723  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:36:54.528465  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:54.528487  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:57.060906  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:36:57.071228  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:36:57.071304  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:36:57.096846  407330 cri.go:89] found id: ""
	I1210 06:36:57.096859  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.096867  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:36:57.096872  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:36:57.096932  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:36:57.122828  407330 cri.go:89] found id: ""
	I1210 06:36:57.122845  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.122852  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:36:57.122858  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:36:57.122918  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:36:57.154708  407330 cri.go:89] found id: ""
	I1210 06:36:57.154723  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.154730  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:36:57.154736  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:36:57.154798  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:36:57.181521  407330 cri.go:89] found id: ""
	I1210 06:36:57.181543  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.181550  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:36:57.181556  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:36:57.181620  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:36:57.206722  407330 cri.go:89] found id: ""
	I1210 06:36:57.206736  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.206743  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:36:57.206749  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:36:57.206811  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:36:57.232129  407330 cri.go:89] found id: ""
	I1210 06:36:57.232143  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.232150  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:36:57.232155  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:36:57.232212  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:36:57.258044  407330 cri.go:89] found id: ""
	I1210 06:36:57.258057  407330 logs.go:282] 0 containers: []
	W1210 06:36:57.258064  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:36:57.258071  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:36:57.258081  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:36:57.285624  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:36:57.285640  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:36:57.351757  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:36:57.351778  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:36:57.367138  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:36:57.367157  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:36:57.458560  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:36:57.450875   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.451498   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453015   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453535   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.454978   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:36:57.450875   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.451498   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453015   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.453535   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:36:57.454978   14483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:36:57.458571  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:36:57.458582  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
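	Each retry walks the full control-plane component list with `sudo crictl ps -a --quiet --name=<component>`; the empty `found id: ""` results are what trigger the `No container was found matching` warnings. The scan pattern, sketched as a standalone program (an illustration, not minikube's cri.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // --quiet prints one container ID per line; no output means no match.
            out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if ids := strings.Fields(string(out)); len(ids) > 0 {
                fmt.Printf("%s: %v\n", name, ids)
            } else {
                fmt.Printf("no container found matching %q\n", name)
            }
        }
    }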
	I1210 06:37:00.035650  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:00.112450  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:00.112528  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:00.233350  407330 cri.go:89] found id: ""
	I1210 06:37:00.233368  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.233377  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:00.233383  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:00.233454  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:00.328120  407330 cri.go:89] found id: ""
	I1210 06:37:00.328136  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.328144  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:00.328150  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:00.328216  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:00.369964  407330 cri.go:89] found id: ""
	I1210 06:37:00.369981  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.369989  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:00.369995  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:00.370065  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:00.412610  407330 cri.go:89] found id: ""
	I1210 06:37:00.412628  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.412636  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:00.412642  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:00.412717  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:00.458193  407330 cri.go:89] found id: ""
	I1210 06:37:00.458212  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.458220  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:00.458225  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:00.458300  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:00.486825  407330 cri.go:89] found id: ""
	I1210 06:37:00.486840  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.486848  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:00.486853  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:00.486912  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:00.514588  407330 cri.go:89] found id: ""
	I1210 06:37:00.514604  407330 logs.go:282] 0 containers: []
	W1210 06:37:00.514612  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:00.514631  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:00.514643  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:00.544788  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:00.544807  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:00.611036  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:00.611058  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:00.625887  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:00.625904  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:00.692620  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:00.684863   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.685709   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687299   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687611   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.689091   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:00.684863   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.685709   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687299   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.687611   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:00.689091   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:00.692631  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:00.692642  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:03.270067  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:03.280541  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:03.280604  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:03.306695  407330 cri.go:89] found id: ""
	I1210 06:37:03.306710  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.306718  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:03.306724  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:03.306788  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:03.335215  407330 cri.go:89] found id: ""
	I1210 06:37:03.335230  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.335237  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:03.335243  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:03.335302  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:03.366128  407330 cri.go:89] found id: ""
	I1210 06:37:03.366143  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.366150  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:03.366155  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:03.366214  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:03.407867  407330 cri.go:89] found id: ""
	I1210 06:37:03.407883  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.407891  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:03.407896  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:03.407957  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:03.439688  407330 cri.go:89] found id: ""
	I1210 06:37:03.439703  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.439710  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:03.439716  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:03.439776  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:03.470617  407330 cri.go:89] found id: ""
	I1210 06:37:03.470633  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.470640  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:03.470645  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:03.470708  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:03.495476  407330 cri.go:89] found id: ""
	I1210 06:37:03.495491  407330 logs.go:282] 0 containers: []
	W1210 06:37:03.495498  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:03.495506  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:03.495516  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:03.562017  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:03.562037  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:03.577764  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:03.577782  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:03.644175  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:03.636471   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.637208   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.638686   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.639152   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.640632   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:03.636471   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.637208   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.638686   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.639152   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:03.640632   14688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:03.644187  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:03.644198  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:03.721903  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:03.721925  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
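	The timestamps (06:37:00, 06:37:03, 06:37:06, ...) show the scan repeating on a roughly three-second cadence, with log gathering filling the gap between attempts. A sketch of the wait-with-deadline shape this loop implies (assumed and simplified, not the actual retry logic):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForAPIServer polls the apiserver port on the ~3s cadence visible in
    // the timestamps above, giving up once the supplied deadline passes.
    func waitForAPIServer(addr string, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if conn, err := net.DialTimeout("tcp", addr, time.Second); err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("kube-apiserver at %s did not come up within %s", addr, deadline)
    }

    func main() {
        if err := waitForAPIServer("localhost:8441", time.Minute); err != nil {
            fmt.Println(err)
        }
    }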
	I1210 06:37:06.250929  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:06.261704  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:06.261767  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:06.290140  407330 cri.go:89] found id: ""
	I1210 06:37:06.290155  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.290163  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:06.290168  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:06.290226  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:06.315796  407330 cri.go:89] found id: ""
	I1210 06:37:06.315811  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.315819  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:06.315826  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:06.315884  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:06.340906  407330 cri.go:89] found id: ""
	I1210 06:37:06.340920  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.340927  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:06.340932  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:06.340996  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:06.367812  407330 cri.go:89] found id: ""
	I1210 06:37:06.367827  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.367835  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:06.367840  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:06.367899  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:06.401044  407330 cri.go:89] found id: ""
	I1210 06:37:06.401058  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.401065  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:06.401070  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:06.401166  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:06.438778  407330 cri.go:89] found id: ""
	I1210 06:37:06.438799  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.438806  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:06.438811  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:06.438892  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:06.466678  407330 cri.go:89] found id: ""
	I1210 06:37:06.466692  407330 logs.go:282] 0 containers: []
	W1210 06:37:06.466700  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:06.466708  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:06.466718  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:06.544177  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:06.544200  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:06.573010  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:06.573027  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:06.640533  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:06.640553  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:06.656110  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:06.656128  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:06.723670  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:06.715242   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.715893   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.717714   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.718356   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.720062   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:06.715242   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.715893   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.717714   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.718356   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:06.720062   14807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:09.224405  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:09.234680  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:09.234741  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:09.260264  407330 cri.go:89] found id: ""
	I1210 06:37:09.260278  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.260285  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:09.260290  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:09.260348  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:09.285806  407330 cri.go:89] found id: ""
	I1210 06:37:09.285823  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.285830  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:09.285836  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:09.285899  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:09.315817  407330 cri.go:89] found id: ""
	I1210 06:37:09.315832  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.315840  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:09.315845  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:09.315901  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:09.346059  407330 cri.go:89] found id: ""
	I1210 06:37:09.346074  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.346081  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:09.346087  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:09.346144  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:09.381275  407330 cri.go:89] found id: ""
	I1210 06:37:09.381290  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.381297  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:09.381303  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:09.381366  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:09.414891  407330 cri.go:89] found id: ""
	I1210 06:37:09.414905  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.414912  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:09.414918  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:09.414979  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:09.443742  407330 cri.go:89] found id: ""
	I1210 06:37:09.443757  407330 logs.go:282] 0 containers: []
	W1210 06:37:09.443763  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:09.443771  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:09.443781  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:09.510740  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:09.510762  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:09.526338  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:09.526355  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:09.590739  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:09.582122   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.582824   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.584640   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.585274   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.587041   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:09.582122   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.582824   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.584640   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.585274   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:09.587041   14893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:09.590750  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:09.590762  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:09.668271  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:09.668292  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
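	The container-status step shells out to `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`: use the resolved crictl path if one exists, fall back to the bare name, and only if that whole command fails try docker instead. The same fallback chain wrapped in Go (a sketch; sudo privileges and tool availability are assumed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The fallback chain the log runs: crictl if present, docker otherwise.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("container status unavailable:", err)
        }
        fmt.Print(string(out))
    }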
	I1210 06:37:12.200039  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:12.210520  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:12.210590  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:12.237060  407330 cri.go:89] found id: ""
	I1210 06:37:12.237075  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.237083  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:12.237088  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:12.237160  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:12.263263  407330 cri.go:89] found id: ""
	I1210 06:37:12.263277  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.263284  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:12.263290  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:12.263354  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:12.291756  407330 cri.go:89] found id: ""
	I1210 06:37:12.291772  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.291780  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:12.291785  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:12.291847  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:12.321162  407330 cri.go:89] found id: ""
	I1210 06:37:12.321177  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.321213  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:12.321218  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:12.321279  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:12.347025  407330 cri.go:89] found id: ""
	I1210 06:37:12.347039  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.347054  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:12.347060  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:12.347121  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:12.376035  407330 cri.go:89] found id: ""
	I1210 06:37:12.376050  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.376058  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:12.376064  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:12.376126  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:12.410703  407330 cri.go:89] found id: ""
	I1210 06:37:12.410717  407330 logs.go:282] 0 containers: []
	W1210 06:37:12.410724  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:12.410733  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:12.410744  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:12.486662  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:12.486686  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:12.502236  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:12.502255  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:12.568662  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:12.560235   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.561423   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.562538   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.563321   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.564916   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:12.560235   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.561423   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.562538   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.563321   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:12.564916   14996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:12.568672  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:12.568683  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:12.645878  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:12.645901  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:15.177927  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:15.191193  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:15.191288  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:15.219881  407330 cri.go:89] found id: ""
	I1210 06:37:15.219896  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.219904  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:15.219911  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:15.219971  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:15.247528  407330 cri.go:89] found id: ""
	I1210 06:37:15.247544  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.247551  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:15.247557  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:15.247620  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:15.274888  407330 cri.go:89] found id: ""
	I1210 06:37:15.274903  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.274911  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:15.274920  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:15.274979  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:15.300280  407330 cri.go:89] found id: ""
	I1210 06:37:15.300295  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.300302  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:15.300308  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:15.300369  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:15.325424  407330 cri.go:89] found id: ""
	I1210 06:37:15.325438  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.325445  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:15.325450  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:15.325512  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:15.359467  407330 cri.go:89] found id: ""
	I1210 06:37:15.359482  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.359490  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:15.359495  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:15.359551  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:15.399967  407330 cri.go:89] found id: ""
	I1210 06:37:15.399982  407330 logs.go:282] 0 containers: []
	W1210 06:37:15.399990  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:15.399998  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:15.400019  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:15.477621  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:15.477643  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:15.493123  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:15.493140  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:15.564193  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:15.554932   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.555848   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.557673   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.558388   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.560046   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:15.554932   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.555848   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.557673   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.558388   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:15.560046   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:15.564206  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:15.564216  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:15.640233  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:15.640254  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
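	The `describe nodes` step runs the version-pinned kubectl binary under /var/lib/minikube/binaries/v1.35.0-rc.1 against the in-VM kubeconfig; with no apiserver listening it exits with status 1, which the collector records as `failed describe nodes` and then moves on. A sketch of the same invocation and error path (illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl"
        out, err := exec.Command("sudo", kubectl, "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            // With the apiserver down this is the path taken: exit status 1,
            // stderr full of "connection refused" lines, empty stdout.
            fmt.Printf("failed describe nodes: %v\n%s", err, out)
            return
        }
        fmt.Print(string(out))
    }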
	I1210 06:37:18.174394  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:18.186025  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:18.186097  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:18.215781  407330 cri.go:89] found id: ""
	I1210 06:37:18.215795  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.215814  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:18.215819  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:18.215877  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:18.241012  407330 cri.go:89] found id: ""
	I1210 06:37:18.241033  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.241044  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:18.241054  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:18.241155  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:18.270058  407330 cri.go:89] found id: ""
	I1210 06:37:18.270072  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.270079  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:18.270090  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:18.270147  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:18.297554  407330 cri.go:89] found id: ""
	I1210 06:37:18.297576  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.297593  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:18.297603  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:18.297695  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:18.330116  407330 cri.go:89] found id: ""
	I1210 06:37:18.330130  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.330136  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:18.330142  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:18.330217  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:18.360475  407330 cri.go:89] found id: ""
	I1210 06:37:18.360489  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.360496  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:18.360502  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:18.360570  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:18.393014  407330 cri.go:89] found id: ""
	I1210 06:37:18.393028  407330 logs.go:282] 0 containers: []
	W1210 06:37:18.393035  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:18.393043  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:18.393064  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:18.412466  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:18.412484  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:18.485431  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:18.477889   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.478765   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.479651   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.480367   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.481965   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:18.477889   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.478765   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.479651   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.480367   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:18.481965   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:18.485441  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:18.485452  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:18.561043  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:18.561064  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:18.588628  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:18.588644  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
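	With no containers to inspect, the only evidence left comes from systemd units and the kernel ring buffer: `journalctl -u kubelet -n 400` and `journalctl -u crio -n 400` tail the last 400 lines per unit, while the dmesg flags are assumed here to mean human-readable output (-H) with no pager (-P), no color (-L=never), and only warning-and-above levels. A combined sketch wrapping the same commands (for illustration; flag meanings as assumed above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(label, cmd string) {
        // CombinedOutput keeps stderr alongside stdout, as the log collector does.
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("==> %s\n%s\n", label, out)
    }

    func main() {
        run("kubelet", "sudo journalctl -u kubelet -n 400")
        run("crio", "sudo journalctl -u crio -n 400")
        run("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }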
	I1210 06:37:21.156119  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:21.166481  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:21.166541  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:21.191589  407330 cri.go:89] found id: ""
	I1210 06:37:21.191604  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.191611  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:21.191625  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:21.191689  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:21.217715  407330 cri.go:89] found id: ""
	I1210 06:37:21.217730  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.217738  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:21.217744  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:21.217811  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:21.246916  407330 cri.go:89] found id: ""
	I1210 06:37:21.246930  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.246945  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:21.246950  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:21.247005  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:21.271644  407330 cri.go:89] found id: ""
	I1210 06:37:21.271659  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.271666  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:21.271672  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:21.271739  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:21.299971  407330 cri.go:89] found id: ""
	I1210 06:37:21.299985  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.299993  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:21.299998  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:21.300057  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:21.325497  407330 cri.go:89] found id: ""
	I1210 06:37:21.325512  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.325519  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:21.325524  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:21.325583  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:21.351049  407330 cri.go:89] found id: ""
	I1210 06:37:21.351064  407330 logs.go:282] 0 containers: []
	W1210 06:37:21.351071  407330 logs.go:284] No container was found matching "kindnet"
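The crictl query is then repeated for each control-plane component, and every one returns an empty ID list, consistent with the apiserver never having started. A compact sweep equivalent to the sequence logged above (a sketch, not minikube's own code):

    for c in kube-apiserver etcd coredns kube-scheduler \
             kube-proxy kube-controller-manager kindnet; do
      # --quiet prints only container IDs; empty output means no match
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done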
	I1210 06:37:21.351079  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:21.351095  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:21.421855  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:21.421874  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
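The dmesg invocation is tuned for capture rather than interactive use. A commented copy (flag meanings per util-linux dmesg, noted as an aid to reading the log):

    # -H human-readable timestamps, -P suppress the pager that -H implies,
    # -L=never disable color codes, --level keep only warning-and-worse lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400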
	I1210 06:37:21.437324  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:21.437341  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:21.499548  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:21.490639   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.491333   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.493043   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.493634   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:21.495290   15313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
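Every kubectl probe in this section fails identically: nothing is listening on localhost:8441, the apiserver port for this profile. That can be confirmed directly on the node before reading further cycles (a sketch assuming ss and curl are present in the image):

    # Anything bound to the apiserver port?
    sudo ss -tlnp | grep ':8441' || echo 'port 8441 not listening'
    # A healthy apiserver would answer here (-k: the cert is self-signed)
    curl -sk https://localhost:8441/healthz; echo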
	I1210 06:37:21.499604  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:21.499615  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:21.576803  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:21.576824  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:24.110608  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:24.121006  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:24.121068  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:24.146461  407330 cri.go:89] found id: ""
	I1210 06:37:24.146476  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.146483  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:24.146488  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:24.146601  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:24.172866  407330 cri.go:89] found id: ""
	I1210 06:37:24.172882  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.172889  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:24.172894  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:24.172956  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:24.199448  407330 cri.go:89] found id: ""
	I1210 06:37:24.199463  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.199470  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:24.199475  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:24.199535  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:24.229234  407330 cri.go:89] found id: ""
	I1210 06:37:24.229250  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.229257  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:24.229263  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:24.229323  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:24.254311  407330 cri.go:89] found id: ""
	I1210 06:37:24.254326  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.254334  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:24.254339  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:24.254401  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:24.284029  407330 cri.go:89] found id: ""
	I1210 06:37:24.284044  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.284051  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:24.284056  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:24.284131  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:24.309694  407330 cri.go:89] found id: ""
	I1210 06:37:24.309708  407330 logs.go:282] 0 containers: []
	W1210 06:37:24.309715  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:24.309724  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:24.309735  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:24.372553  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:24.363947   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.364695   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.366686   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.367278   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:24.368967   15404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
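Note that the describe-nodes probe does not use the host's kubectl: it runs the version-matched binary staged under /var/lib/minikube/binaries and points it at the node-local kubeconfig. Reduced to its essentials, the command it keeps retrying is:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig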
	I1210 06:37:24.372563  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:24.372575  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:24.464562  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:24.464585  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:24.493762  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:24.493778  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
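Unit logs are pulled with a fixed 400-line tail rather than a time window, which keeps each capture bounded during a long retry loop. The equivalent interactive commands (with --no-pager added, since a terminal would otherwise start a pager; over ssh_runner none starts):

    sudo journalctl -u kubelet -n 400 --no-pager   # kubelet tail
    sudo journalctl -u crio -n 400 --no-pager      # CRI-O runtime tail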
	I1210 06:37:24.563092  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:24.563113  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:27.078938  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:27.089277  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:27.089338  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:27.114399  407330 cri.go:89] found id: ""
	I1210 06:37:27.114413  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.114421  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:27.114427  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:27.114491  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:27.144680  407330 cri.go:89] found id: ""
	I1210 06:37:27.144695  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.144702  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:27.144707  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:27.144765  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:27.168950  407330 cri.go:89] found id: ""
	I1210 06:37:27.168965  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.168972  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:27.168977  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:27.169034  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:27.196136  407330 cri.go:89] found id: ""
	I1210 06:37:27.196151  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.196159  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:27.196164  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:27.196221  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:27.225403  407330 cri.go:89] found id: ""
	I1210 06:37:27.225418  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.225426  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:27.225432  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:27.225492  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:27.252922  407330 cri.go:89] found id: ""
	I1210 06:37:27.252938  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.252945  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:27.252950  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:27.253009  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:27.278155  407330 cri.go:89] found id: ""
	I1210 06:37:27.278169  407330 logs.go:282] 0 containers: []
	W1210 06:37:27.278177  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:27.278185  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:27.278197  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:27.309557  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:27.309573  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:27.385911  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:27.385939  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:27.404671  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:27.404689  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:27.482019  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:27.473831   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.474734   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.476086   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.476735   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:27.478362   15528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:27.482029  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:27.482040  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:30.059859  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:30.073120  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:30.073221  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:30.104876  407330 cri.go:89] found id: ""
	I1210 06:37:30.104902  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.104910  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:30.104915  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:30.104992  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:30.133968  407330 cri.go:89] found id: ""
	I1210 06:37:30.133984  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.133999  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:30.134007  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:30.134079  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:30.162870  407330 cri.go:89] found id: ""
	I1210 06:37:30.162888  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.162895  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:30.162901  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:30.162965  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:30.190402  407330 cri.go:89] found id: ""
	I1210 06:37:30.190416  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.190424  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:30.190429  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:30.190488  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:30.219884  407330 cri.go:89] found id: ""
	I1210 06:37:30.219913  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.219920  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:30.219926  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:30.219999  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:30.246737  407330 cri.go:89] found id: ""
	I1210 06:37:30.246752  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.246760  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:30.246765  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:30.246825  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:30.273326  407330 cri.go:89] found id: ""
	I1210 06:37:30.273340  407330 logs.go:282] 0 containers: []
	W1210 06:37:30.273348  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:30.273356  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:30.273366  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:30.350646  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:30.350667  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:30.385499  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:30.385515  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:30.461766  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:30.461790  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:30.477421  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:30.477438  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:30.539694  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:30.532297   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.532864   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.534312   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.534817   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:30.536259   15638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:33.041379  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:33.052111  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:33.052178  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:33.080472  407330 cri.go:89] found id: ""
	I1210 06:37:33.080487  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.080494  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:33.080499  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:33.080556  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:33.107304  407330 cri.go:89] found id: ""
	I1210 06:37:33.107319  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.107326  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:33.107331  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:33.107389  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:33.133653  407330 cri.go:89] found id: ""
	I1210 06:37:33.133668  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.133675  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:33.133680  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:33.133740  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:33.159244  407330 cri.go:89] found id: ""
	I1210 06:37:33.159259  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.159266  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:33.159272  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:33.159328  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:33.185378  407330 cri.go:89] found id: ""
	I1210 06:37:33.185393  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.185402  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:33.185407  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:33.185466  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:33.210558  407330 cri.go:89] found id: ""
	I1210 06:37:33.210588  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.210609  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:33.210615  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:33.210672  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:33.235742  407330 cri.go:89] found id: ""
	I1210 06:37:33.235756  407330 logs.go:282] 0 containers: []
	W1210 06:37:33.235773  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:33.235782  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:33.235796  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:33.303992  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:33.304010  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:33.321348  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:33.321367  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:33.396780  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:33.385824   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.386759   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.387788   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.388485   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:33.390532   15724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:33.396789  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:33.396800  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:33.483704  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:33.483727  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:36.014717  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:36.026269  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:36.026331  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:36.054956  407330 cri.go:89] found id: ""
	I1210 06:37:36.054982  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.054989  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:36.054995  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:36.055055  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:36.081454  407330 cri.go:89] found id: ""
	I1210 06:37:36.081470  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.081477  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:36.081483  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:36.081544  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:36.112094  407330 cri.go:89] found id: ""
	I1210 06:37:36.112108  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.112116  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:36.112121  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:36.112181  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:36.138426  407330 cri.go:89] found id: ""
	I1210 06:37:36.138441  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.138448  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:36.138453  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:36.138512  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:36.164608  407330 cri.go:89] found id: ""
	I1210 06:37:36.164623  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.164630  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:36.164637  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:36.164693  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:36.192038  407330 cri.go:89] found id: ""
	I1210 06:37:36.192052  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.192059  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:36.192064  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:36.192124  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:36.221044  407330 cri.go:89] found id: ""
	I1210 06:37:36.221058  407330 logs.go:282] 0 containers: []
	W1210 06:37:36.221065  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:36.221073  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:36.221085  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:36.250907  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:36.250923  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:36.316733  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:36.316753  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:36.332493  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:36.332509  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:36.412829  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:36.401482   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.404020   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.404535   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.405958   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:36.407122   15837 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
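From 06:37:18 onward the cycles repeat on a roughly three-second cadence with no change in state: zero containers found, identical connection-refused errors. A minimal sketch of the wait loop these entries imply (cadence and pattern taken from the log; minikube's actual implementation is in Go, not shell):

    # Poll for an apiserver process every 3 s, up to 100 attempts (~5 min)
    for i in $(seq 1 100); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done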
	I1210 06:37:36.412843  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:36.412857  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:39.007236  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:39.020585  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:39.020658  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:39.046864  407330 cri.go:89] found id: ""
	I1210 06:37:39.046879  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.046886  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:39.046892  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:39.046954  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:39.076119  407330 cri.go:89] found id: ""
	I1210 06:37:39.076143  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.076152  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:39.076157  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:39.076226  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:39.102655  407330 cri.go:89] found id: ""
	I1210 06:37:39.102671  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.102678  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:39.102684  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:39.102746  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:39.128306  407330 cri.go:89] found id: ""
	I1210 06:37:39.128320  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.128327  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:39.128333  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:39.128407  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:39.156045  407330 cri.go:89] found id: ""
	I1210 06:37:39.156069  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.156076  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:39.156087  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:39.156156  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:39.183781  407330 cri.go:89] found id: ""
	I1210 06:37:39.183796  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.183804  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:39.183809  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:39.183867  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:39.209244  407330 cri.go:89] found id: ""
	I1210 06:37:39.209258  407330 logs.go:282] 0 containers: []
	W1210 06:37:39.209266  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:39.209273  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:39.209294  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:39.274373  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:39.274392  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:39.289765  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:39.289782  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:39.353525  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:39.345986   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.346357   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.348003   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.348560   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:39.350004   15932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:39.353537  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:39.353548  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:39.432803  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:39.432822  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:41.965778  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:41.979117  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:41.979179  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:42.015640  407330 cri.go:89] found id: ""
	I1210 06:37:42.015658  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.015683  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:42.015689  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:42.015759  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:42.048532  407330 cri.go:89] found id: ""
	I1210 06:37:42.048546  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.048553  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:42.048559  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:42.048618  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:42.076982  407330 cri.go:89] found id: ""
	I1210 06:37:42.076998  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.077006  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:42.077012  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:42.077084  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:42.112254  407330 cri.go:89] found id: ""
	I1210 06:37:42.112295  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.112304  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:42.112312  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:42.112393  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:42.150624  407330 cri.go:89] found id: ""
	I1210 06:37:42.150640  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.150647  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:42.150653  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:42.150718  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:42.180813  407330 cri.go:89] found id: ""
	I1210 06:37:42.180845  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.180854  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:42.180860  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:42.180927  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:42.212103  407330 cri.go:89] found id: ""
	I1210 06:37:42.212120  407330 logs.go:282] 0 containers: []
	W1210 06:37:42.212129  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:42.212139  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:42.212151  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:42.228371  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:42.228388  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:42.298333  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:42.290091   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.290977   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.292784   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.293526   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:42.294529   16037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:37:42.298344  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:42.298363  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:42.375054  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:42.375076  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:42.409015  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:42.409031  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:44.985261  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:44.995937  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:44.995997  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:45.074766  407330 cri.go:89] found id: ""
	I1210 06:37:45.074782  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.074790  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:45.074805  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:45.074874  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:45.130730  407330 cri.go:89] found id: ""
	I1210 06:37:45.130747  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.130755  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:45.130760  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:45.130828  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:45.169030  407330 cri.go:89] found id: ""
	I1210 06:37:45.169058  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.169067  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:45.169073  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:45.169157  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:45.215800  407330 cri.go:89] found id: ""
	I1210 06:37:45.215826  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.215835  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:45.215841  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:45.215915  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:45.274656  407330 cri.go:89] found id: ""
	I1210 06:37:45.274675  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.274684  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:45.274689  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:45.274771  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:45.313260  407330 cri.go:89] found id: ""
	I1210 06:37:45.313277  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.313290  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:45.313296  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:45.313418  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:45.347971  407330 cri.go:89] found id: ""
	I1210 06:37:45.347997  407330 logs.go:282] 0 containers: []
	W1210 06:37:45.348005  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:45.348014  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:45.348028  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:45.381763  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:45.381780  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:45.462459  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:45.462482  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:45.477837  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:45.477854  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:45.547658  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:45.539217   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.540334   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.541688   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.542195   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.543964   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:45.539217   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.540334   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.541688   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.542195   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:45.543964   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
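The describe-nodes step fails for the same underlying reason the container scan came back empty: kubectl repeatedly gets connection refused dialing the apiserver at localhost:8441, i.e. nothing is listening on that port. Two quick host-side checks separate "apiserver crashed" from "apiserver never started" (a sketch; the pgrep pattern is the one minikube itself runs below, ss is a standard iproute2 tool, and port 8441 is read off the errors above):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # is the process alive at all?
    sudo ss -tlnp | grep -w 8441 || echo "nothing listening on 8441"

Here both would come back empty: there is no kube-apiserver container, hence no process and no listener.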
	I1210 06:37:45.547669  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:45.547680  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
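From here the transcript repeats the same cycle roughly every three seconds (06:37:48, :51, :54, :57, 06:38:00, :03, :06, :09, :12): a pgrep for the apiserver process, the seven crictl probes, then the five log-gathering steps, whose order rotates from cycle to cycle. Schematically, as a shell loop (a reconstruction of the visible cadence, not minikube's actual code; gather_logs is a hypothetical stand-in for the five steps):

    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      gather_logs    # crictl/docker ps, kubelet, dmesg, describe nodes, CRI-O
      sleep 3
    done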
	I1210 06:37:48.124454  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:48.134803  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:48.134866  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:48.162481  407330 cri.go:89] found id: ""
	I1210 06:37:48.162498  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.162507  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:48.162512  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:48.162572  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:48.192262  407330 cri.go:89] found id: ""
	I1210 06:37:48.192276  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.192283  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:48.192289  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:48.192350  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:48.220715  407330 cri.go:89] found id: ""
	I1210 06:37:48.220730  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.220737  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:48.220742  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:48.220802  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:48.244954  407330 cri.go:89] found id: ""
	I1210 06:37:48.244968  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.244976  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:48.244981  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:48.245040  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:48.272316  407330 cri.go:89] found id: ""
	I1210 06:37:48.272330  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.272337  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:48.272343  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:48.272399  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:48.300204  407330 cri.go:89] found id: ""
	I1210 06:37:48.300219  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.300226  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:48.300232  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:48.300293  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:48.329747  407330 cri.go:89] found id: ""
	I1210 06:37:48.329762  407330 logs.go:282] 0 containers: []
	W1210 06:37:48.329769  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:48.329777  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:48.329789  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:48.395638  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:48.395658  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:48.411092  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:48.411108  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:48.478819  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:48.470539   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.471330   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.472882   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.473423   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.475010   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:48.470539   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.471330   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.472882   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.473423   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:48.475010   16261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:48.478829  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:48.478841  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:48.556858  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:48.556880  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:51.087332  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:51.097952  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:51.098014  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:51.125310  407330 cri.go:89] found id: ""
	I1210 06:37:51.125325  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.125333  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:51.125345  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:51.125424  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:51.152518  407330 cri.go:89] found id: ""
	I1210 06:37:51.152533  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.152541  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:51.152547  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:51.152619  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:51.181199  407330 cri.go:89] found id: ""
	I1210 06:37:51.181214  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.181222  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:51.181233  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:51.181302  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:51.211368  407330 cri.go:89] found id: ""
	I1210 06:37:51.211382  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.211399  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:51.211405  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:51.211473  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:51.240371  407330 cri.go:89] found id: ""
	I1210 06:37:51.240386  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.240413  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:51.240420  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:51.240493  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:51.266983  407330 cri.go:89] found id: ""
	I1210 06:37:51.266998  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.267005  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:51.267010  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:51.267077  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:51.292392  407330 cri.go:89] found id: ""
	I1210 06:37:51.292417  407330 logs.go:282] 0 containers: []
	W1210 06:37:51.292425  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:51.292433  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:51.292443  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:51.357098  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:51.357119  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:51.372292  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:51.372310  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:51.456874  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:51.448584   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.449513   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.451286   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.451619   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.453250   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:51.448584   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.449513   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.451286   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.451619   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:51.453250   16367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:51.456885  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:51.456896  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:51.532131  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:51.532155  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:54.070226  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:54.081032  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:54.081095  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:54.107855  407330 cri.go:89] found id: ""
	I1210 06:37:54.107871  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.107878  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:54.107884  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:54.107954  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:54.133470  407330 cri.go:89] found id: ""
	I1210 06:37:54.133484  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.133491  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:54.133496  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:54.133556  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:54.160836  407330 cri.go:89] found id: ""
	I1210 06:37:54.160851  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.160859  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:54.160864  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:54.160931  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:54.191664  407330 cri.go:89] found id: ""
	I1210 06:37:54.191679  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.191686  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:54.191692  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:54.191758  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:54.216267  407330 cri.go:89] found id: ""
	I1210 06:37:54.216280  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.216298  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:54.216303  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:54.216370  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:54.241369  407330 cri.go:89] found id: ""
	I1210 06:37:54.241383  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.241390  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:54.241395  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:54.241454  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:54.265711  407330 cri.go:89] found id: ""
	I1210 06:37:54.265725  407330 logs.go:282] 0 containers: []
	W1210 06:37:54.265732  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:54.265740  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:54.265750  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:37:54.280292  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:54.280314  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:54.343110  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:54.335264   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.335904   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337479   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337979   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.339543   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:54.335264   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.335904   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337479   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.337979   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:54.339543   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:54.343120  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:54.343131  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:54.421398  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:54.421417  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:54.457832  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:54.457849  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:57.030320  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:37:57.040862  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:37:57.040923  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:37:57.065817  407330 cri.go:89] found id: ""
	I1210 06:37:57.065832  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.065840  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:37:57.065845  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:37:57.065908  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:37:57.091828  407330 cri.go:89] found id: ""
	I1210 06:37:57.091842  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.091849  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:37:57.091855  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:37:57.091912  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:37:57.117033  407330 cri.go:89] found id: ""
	I1210 06:37:57.117047  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.117054  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:37:57.117060  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:37:57.117128  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:37:57.143007  407330 cri.go:89] found id: ""
	I1210 06:37:57.143021  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.143028  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:37:57.143034  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:37:57.143090  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:37:57.171364  407330 cri.go:89] found id: ""
	I1210 06:37:57.171379  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.171386  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:37:57.171391  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:37:57.171451  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:37:57.195695  407330 cri.go:89] found id: ""
	I1210 06:37:57.195723  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.195730  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:37:57.195736  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:37:57.195802  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:37:57.225018  407330 cri.go:89] found id: ""
	I1210 06:37:57.225033  407330 logs.go:282] 0 containers: []
	W1210 06:37:57.225040  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:37:57.225049  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:37:57.225060  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:37:57.299878  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:37:57.291518   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.292410   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294182   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294649   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.296294   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:37:57.291518   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.292410   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294182   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.294649   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:37:57.296294   16563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:37:57.299889  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:37:57.299899  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:37:57.377757  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:37:57.377778  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:37:57.420515  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:37:57.420531  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:37:57.493246  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:37:57.493267  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:00.010113  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:00.082560  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:00.082643  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:00.187405  407330 cri.go:89] found id: ""
	I1210 06:38:00.190377  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.190403  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:00.190413  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:00.190506  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:00.256368  407330 cri.go:89] found id: ""
	I1210 06:38:00.256395  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.256405  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:00.256411  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:00.256498  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:00.309570  407330 cri.go:89] found id: ""
	I1210 06:38:00.309587  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.309595  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:00.309602  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:00.309691  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:00.359167  407330 cri.go:89] found id: ""
	I1210 06:38:00.359184  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.359193  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:00.359199  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:00.359284  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:00.401533  407330 cri.go:89] found id: ""
	I1210 06:38:00.401549  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.401557  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:00.401562  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:00.401629  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:00.439769  407330 cri.go:89] found id: ""
	I1210 06:38:00.439784  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.439792  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:00.439797  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:00.439863  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:00.471369  407330 cri.go:89] found id: ""
	I1210 06:38:00.471384  407330 logs.go:282] 0 containers: []
	W1210 06:38:00.471392  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:00.471400  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:00.471412  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:00.504494  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:00.504511  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:00.570722  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:00.570742  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:00.585662  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:00.585679  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:00.648503  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:00.640817   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.641687   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643282   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643584   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.645048   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:00.640817   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.641687   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643282   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.643584   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:00.645048   16689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:00.648513  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:00.648524  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:03.225660  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:03.235918  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:03.235979  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:03.260969  407330 cri.go:89] found id: ""
	I1210 06:38:03.260984  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.260991  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:03.260996  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:03.261058  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:03.286700  407330 cri.go:89] found id: ""
	I1210 06:38:03.286714  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.286721  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:03.286726  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:03.286785  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:03.315672  407330 cri.go:89] found id: ""
	I1210 06:38:03.315686  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.315694  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:03.315699  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:03.315757  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:03.344486  407330 cri.go:89] found id: ""
	I1210 06:38:03.344501  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.344508  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:03.344517  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:03.344576  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:03.371038  407330 cri.go:89] found id: ""
	I1210 06:38:03.371052  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.371059  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:03.371064  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:03.371127  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:03.404397  407330 cri.go:89] found id: ""
	I1210 06:38:03.404412  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.404420  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:03.404425  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:03.404492  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:03.440935  407330 cri.go:89] found id: ""
	I1210 06:38:03.440949  407330 logs.go:282] 0 containers: []
	W1210 06:38:03.440957  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:03.440965  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:03.440975  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:03.509589  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:03.509610  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:03.525492  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:03.525509  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:03.592907  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:03.584405   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.585238   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587039   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587639   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.589379   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:03.584405   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.585238   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587039   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.587639   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:03.589379   16780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:03.592926  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:03.592938  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:03.669095  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:03.669114  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:06.198833  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:06.209381  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:06.209457  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:06.234410  407330 cri.go:89] found id: ""
	I1210 06:38:06.234424  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.234431  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:06.234437  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:06.234495  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:06.264001  407330 cri.go:89] found id: ""
	I1210 06:38:06.264016  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.264022  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:06.264028  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:06.264087  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:06.289353  407330 cri.go:89] found id: ""
	I1210 06:38:06.289367  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.289375  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:06.289380  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:06.289442  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:06.318627  407330 cri.go:89] found id: ""
	I1210 06:38:06.318643  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.318651  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:06.318656  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:06.318715  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:06.344169  407330 cri.go:89] found id: ""
	I1210 06:38:06.344183  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.344191  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:06.344196  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:06.344255  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:06.372255  407330 cri.go:89] found id: ""
	I1210 06:38:06.372270  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.372277  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:06.372283  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:06.372346  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:06.410561  407330 cri.go:89] found id: ""
	I1210 06:38:06.410575  407330 logs.go:282] 0 containers: []
	W1210 06:38:06.410582  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:06.410590  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:06.410601  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:06.485685  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:06.485706  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:06.500886  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:06.500904  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:06.569054  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:06.561431   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.562119   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.563630   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.564134   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.565584   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:06.561431   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.562119   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.563630   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.564134   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:06.565584   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:06.569065  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:06.569078  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:06.650735  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:06.650760  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:09.182920  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:09.193744  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:09.193805  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:09.224238  407330 cri.go:89] found id: ""
	I1210 06:38:09.224253  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.224260  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:09.224265  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:09.224321  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:09.249812  407330 cri.go:89] found id: ""
	I1210 06:38:09.249827  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.249835  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:09.249840  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:09.249900  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:09.275012  407330 cri.go:89] found id: ""
	I1210 06:38:09.275025  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.275032  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:09.275037  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:09.275094  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:09.299472  407330 cri.go:89] found id: ""
	I1210 06:38:09.299500  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.299508  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:09.299513  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:09.299579  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:09.325485  407330 cri.go:89] found id: ""
	I1210 06:38:09.325499  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.325507  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:09.325512  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:09.325567  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:09.350568  407330 cri.go:89] found id: ""
	I1210 06:38:09.350582  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.350589  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:09.350594  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:09.350657  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:09.380510  407330 cri.go:89] found id: ""
	I1210 06:38:09.380524  407330 logs.go:282] 0 containers: []
	W1210 06:38:09.380531  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:09.380548  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:09.380560  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:09.421824  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:09.421840  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:09.497738  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:09.497764  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:09.513692  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:09.513711  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:09.581478  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:09.573930   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.574589   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.576111   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.576487   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.577997   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:09.573930   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.574589   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.576111   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.576487   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:09.577997   16997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:09.581497  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:09.581507  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
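Cycle after cycle across this span the picture is unchanged: zero matching containers for every component, no kube-apiserver process, and connection refused on 8441 each time, so minikube keeps polling on the same three-second cadence. To watch for recovery by hand one could poll just the apiserver probe (a convenience loop under the same assumptions as above, not taken from the report):

    until sudo crictl ps -a --quiet --name=kube-apiserver | grep -q .; do sleep 3; done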
	I1210 06:38:12.158761  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:12.169119  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:12.169177  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:12.194655  407330 cri.go:89] found id: ""
	I1210 06:38:12.194670  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.194677  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:12.194683  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:12.194739  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:12.223200  407330 cri.go:89] found id: ""
	I1210 06:38:12.223216  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.223223  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:12.223228  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:12.223293  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:12.249017  407330 cri.go:89] found id: ""
	I1210 06:38:12.249032  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.249043  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:12.249049  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:12.249110  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:12.274392  407330 cri.go:89] found id: ""
	I1210 06:38:12.274407  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.274414  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:12.274420  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:12.274477  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:12.299224  407330 cri.go:89] found id: ""
	I1210 06:38:12.299238  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.299245  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:12.299250  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:12.299310  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:12.324356  407330 cri.go:89] found id: ""
	I1210 06:38:12.324370  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.324377  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:12.324383  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:12.324441  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:12.355846  407330 cri.go:89] found id: ""
	I1210 06:38:12.355876  407330 logs.go:282] 0 containers: []
	W1210 06:38:12.355883  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:12.355892  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:12.355903  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:12.426588  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:12.426608  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:12.446044  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:12.446061  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:12.519015  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:12.508422   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.508965   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.513107   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.513691   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:12.515195   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:12.519025  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:12.519036  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:12.595463  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:12.595494  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
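
The container-status gather above uses a small shell fallback: "which crictl || echo crictl" expands to crictl's resolved path when the binary is on PATH, and to the bare name otherwise, so a failure message still says what was attempted; the trailing "|| sudo docker ps -a" then degrades to Docker if the CRI listing itself fails. The same pattern in isolation:

    # resolve the tool if present, keep the bare name otherwise
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a   # fall back to docker on any failure
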
	I1210 06:38:15.126222  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:15.136973  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:15.137050  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:15.168527  407330 cri.go:89] found id: ""
	I1210 06:38:15.168542  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.168549  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:15.168554  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:15.168615  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:15.195472  407330 cri.go:89] found id: ""
	I1210 06:38:15.195488  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.195496  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:15.195501  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:15.195560  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:15.222272  407330 cri.go:89] found id: ""
	I1210 06:38:15.222286  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.222293  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:15.222298  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:15.222359  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:15.252445  407330 cri.go:89] found id: ""
	I1210 06:38:15.252460  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.252473  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:15.252479  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:15.252541  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:15.279037  407330 cri.go:89] found id: ""
	I1210 06:38:15.279056  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.279063  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:15.279069  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:15.279130  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:15.304272  407330 cri.go:89] found id: ""
	I1210 06:38:15.304287  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.304294  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:15.304299  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:15.304358  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:15.329937  407330 cri.go:89] found id: ""
	I1210 06:38:15.329951  407330 logs.go:282] 0 containers: []
	W1210 06:38:15.329958  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:15.329965  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:15.329976  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:15.344908  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:15.344927  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:15.430525  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:15.420038   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.420859   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.422803   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.424594   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:15.426170   17189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:15.430538  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:15.430549  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:15.506380  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:15.506403  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:15.535708  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:15.535725  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:18.102529  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:18.114363  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:18.114433  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:18.140986  407330 cri.go:89] found id: ""
	I1210 06:38:18.141000  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.141007  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:18.141012  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:18.141070  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:18.167798  407330 cri.go:89] found id: ""
	I1210 06:38:18.167812  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.167819  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:18.167827  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:18.167883  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:18.194514  407330 cri.go:89] found id: ""
	I1210 06:38:18.194539  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.194547  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:18.194553  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:18.194614  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:18.219929  407330 cri.go:89] found id: ""
	I1210 06:38:18.219943  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.219949  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:18.219955  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:18.220013  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:18.247728  407330 cri.go:89] found id: ""
	I1210 06:38:18.247742  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.247749  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:18.247755  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:18.247814  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:18.274948  407330 cri.go:89] found id: ""
	I1210 06:38:18.274963  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.274971  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:18.274976  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:18.275034  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:18.301159  407330 cri.go:89] found id: ""
	I1210 06:38:18.301173  407330 logs.go:282] 0 containers: []
	W1210 06:38:18.301196  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:18.301204  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:18.301222  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:18.337936  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:18.337955  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:18.404135  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:18.404153  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:18.420644  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:18.420661  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:18.488180  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:18.479576   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.480035   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.481748   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.482513   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.484281   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:18.479576   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.480035   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.481748   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.482513   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:18.484281   17312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:18.488199  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:18.488210  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:21.064064  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:21.074224  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:21.074283  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:21.100332  407330 cri.go:89] found id: ""
	I1210 06:38:21.100347  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.100354  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:21.100359  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:21.100416  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:21.128496  407330 cri.go:89] found id: ""
	I1210 06:38:21.128511  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.128518  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:21.128523  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:21.128583  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:21.165661  407330 cri.go:89] found id: ""
	I1210 06:38:21.165675  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.165682  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:21.165687  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:21.165745  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:21.191177  407330 cri.go:89] found id: ""
	I1210 06:38:21.191191  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.191199  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:21.191204  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:21.191262  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:21.217247  407330 cri.go:89] found id: ""
	I1210 06:38:21.217263  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.217270  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:21.217275  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:21.217336  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:21.243649  407330 cri.go:89] found id: ""
	I1210 06:38:21.243663  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.243670  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:21.243675  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:21.243731  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:21.272574  407330 cri.go:89] found id: ""
	I1210 06:38:21.272589  407330 logs.go:282] 0 containers: []
	W1210 06:38:21.272596  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:21.272604  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:21.272615  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:21.336563  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:21.328507   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.329001   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.330691   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.331320   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:21.332859   17392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:21.336573  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:21.336583  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:21.419141  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:21.419163  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:21.452486  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:21.452504  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:21.518913  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:21.518934  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
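
The timestamps show the same gather-and-retry cycle repeating roughly every three seconds: probe for a kube-apiserver process, list CRI containers for each control-plane component, and re-collect logs when nothing is found. The retry reduces to a bounded poll loop; as a sketch, where the three-second interval is read off the timestamps above but the two-minute deadline is an assumption, not a value taken from this log:

    # poll until the apiserver process appears or the deadline passes
    deadline=$(( $(date +%s) + 120 ))        # assumed overall timeout
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        [ "$(date +%s)" -ge "$deadline" ] && { echo 'kube-apiserver never appeared' >&2; exit 1; }
        sleep 3                              # matches the cadence in the log
    done
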
	I1210 06:38:24.035407  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:24.051364  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:24.051491  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:24.079890  407330 cri.go:89] found id: ""
	I1210 06:38:24.079905  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.079913  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:24.079918  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:24.079976  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:24.108058  407330 cri.go:89] found id: ""
	I1210 06:38:24.108072  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.108089  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:24.108094  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:24.108160  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:24.136304  407330 cri.go:89] found id: ""
	I1210 06:38:24.136318  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.136325  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:24.136331  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:24.136388  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:24.166784  407330 cri.go:89] found id: ""
	I1210 06:38:24.166805  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.166813  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:24.166819  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:24.166879  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:24.194254  407330 cri.go:89] found id: ""
	I1210 06:38:24.194270  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.194278  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:24.194283  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:24.194349  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:24.220032  407330 cri.go:89] found id: ""
	I1210 06:38:24.220046  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.220053  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:24.220058  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:24.220125  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:24.249252  407330 cri.go:89] found id: ""
	I1210 06:38:24.249267  407330 logs.go:282] 0 containers: []
	W1210 06:38:24.249275  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:24.249282  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:24.249301  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:24.332782  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:24.332809  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:24.363293  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:24.363313  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:24.439310  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:24.439334  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:24.454866  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:24.454883  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:24.518646  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:24.510636   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.511199   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.512759   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.513269   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:24.514934   17524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:27.018916  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:27.029680  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:27.029748  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:27.057853  407330 cri.go:89] found id: ""
	I1210 06:38:27.057868  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.057876  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:27.057881  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:27.057943  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:27.088489  407330 cri.go:89] found id: ""
	I1210 06:38:27.088504  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.088512  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:27.088517  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:27.088576  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:27.114135  407330 cri.go:89] found id: ""
	I1210 06:38:27.114150  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.114158  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:27.114163  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:27.114222  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:27.144417  407330 cri.go:89] found id: ""
	I1210 06:38:27.144431  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.144438  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:27.144443  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:27.144502  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:27.170599  407330 cri.go:89] found id: ""
	I1210 06:38:27.170613  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.170621  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:27.170626  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:27.170704  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:27.196493  407330 cri.go:89] found id: ""
	I1210 06:38:27.196508  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.196516  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:27.196521  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:27.196577  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:27.222440  407330 cri.go:89] found id: ""
	I1210 06:38:27.222455  407330 logs.go:282] 0 containers: []
	W1210 06:38:27.222462  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:27.222469  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:27.222480  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:27.288558  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:27.288578  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:27.304274  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:27.304290  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:27.370398  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:27.361823   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.362522   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.364129   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.364518   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:27.366357   17608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:27.370408  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:27.370419  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:27.458800  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:27.458821  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:29.988954  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:29.999798  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:29.999864  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:30.095338  407330 cri.go:89] found id: ""
	I1210 06:38:30.095356  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.095364  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:30.095370  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:30.095440  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:30.129132  407330 cri.go:89] found id: ""
	I1210 06:38:30.129148  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.129156  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:30.129162  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:30.129271  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:30.157101  407330 cri.go:89] found id: ""
	I1210 06:38:30.157117  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.157124  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:30.157130  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:30.157224  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:30.184791  407330 cri.go:89] found id: ""
	I1210 06:38:30.184806  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.184814  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:30.184819  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:30.184885  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:30.211932  407330 cri.go:89] found id: ""
	I1210 06:38:30.211958  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.211966  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:30.211971  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:30.212041  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:30.238373  407330 cri.go:89] found id: ""
	I1210 06:38:30.238398  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.238407  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:30.238413  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:30.238479  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:30.266144  407330 cri.go:89] found id: ""
	I1210 06:38:30.266159  407330 logs.go:282] 0 containers: []
	W1210 06:38:30.266167  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:30.266176  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:30.266187  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:30.337549  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:30.337570  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:30.353715  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:30.353731  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:30.430797  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:30.422887   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.423661   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.425295   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.425615   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:30.427098   17712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:30.430808  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:30.430821  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:30.510900  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:30.510921  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:33.040458  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:33.051069  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:33.051132  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:33.081117  407330 cri.go:89] found id: ""
	I1210 06:38:33.081131  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.081138  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:33.081144  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:33.081232  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:33.110972  407330 cri.go:89] found id: ""
	I1210 06:38:33.110986  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.110993  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:33.110998  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:33.111055  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:33.136083  407330 cri.go:89] found id: ""
	I1210 06:38:33.136098  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.136104  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:33.136110  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:33.136170  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:33.162539  407330 cri.go:89] found id: ""
	I1210 06:38:33.162554  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.162561  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:33.162567  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:33.162628  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:33.192025  407330 cri.go:89] found id: ""
	I1210 06:38:33.192039  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.192047  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:33.192053  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:33.192114  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:33.217529  407330 cri.go:89] found id: ""
	I1210 06:38:33.217544  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.217562  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:33.217568  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:33.217637  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:33.242901  407330 cri.go:89] found id: ""
	I1210 06:38:33.242916  407330 logs.go:282] 0 containers: []
	W1210 06:38:33.242923  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:33.242931  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:33.242942  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:33.311877  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:33.311897  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:33.327423  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:33.327438  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:33.395423  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:33.386462   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.387346   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.388905   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.389556   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:33.391613   17819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:38:33.395434  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:33.395444  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:33.477529  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:33.477551  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:36.008120  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:36.021683  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:38:36.021745  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:38:36.049460  407330 cri.go:89] found id: ""
	I1210 06:38:36.049475  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.049482  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:38:36.049487  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:38:36.049560  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:38:36.076929  407330 cri.go:89] found id: ""
	I1210 06:38:36.076944  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.076951  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:38:36.076956  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:38:36.077017  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:38:36.103193  407330 cri.go:89] found id: ""
	I1210 06:38:36.103208  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.103214  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:38:36.103219  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:38:36.103285  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:38:36.129995  407330 cri.go:89] found id: ""
	I1210 06:38:36.130009  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.130024  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:38:36.130029  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:38:36.130087  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:38:36.156753  407330 cri.go:89] found id: ""
	I1210 06:38:36.156781  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.156789  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:38:36.156794  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:38:36.156857  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:38:36.188439  407330 cri.go:89] found id: ""
	I1210 06:38:36.188453  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.188461  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:38:36.188466  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:38:36.188525  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:38:36.214278  407330 cri.go:89] found id: ""
	I1210 06:38:36.214293  407330 logs.go:282] 0 containers: []
	W1210 06:38:36.214300  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:38:36.214309  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:38:36.214321  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:38:36.280730  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:38:36.280750  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:38:36.296203  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:38:36.296220  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:38:36.364197  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:38:36.355593   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.356352   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358159   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358872   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.360561   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:38:36.355593   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.356352   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358159   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.358872   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:38:36.360561   17921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:38:36.364209  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:38:36.364222  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:38:36.458076  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:38:36.458097  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:38:38.987911  407330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:38.998557  407330 kubeadm.go:602] duration metric: took 4m3.870918207s to restartPrimaryControlPlane
	W1210 06:38:38.998620  407330 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 06:38:38.998704  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 06:38:39.409934  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:38:39.423184  407330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:38:39.431304  407330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:38:39.431358  407330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:38:39.439341  407330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:38:39.439350  407330 kubeadm.go:158] found existing configuration files:
	
	I1210 06:38:39.439401  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:38:39.447538  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:38:39.447592  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:38:39.454886  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:38:39.462719  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:38:39.462778  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:38:39.470357  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:38:39.477894  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:38:39.477950  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:38:39.485341  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:38:39.493235  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:38:39.493292  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:38:39.500743  407330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:38:39.538320  407330 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:38:39.538555  407330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:38:39.610131  407330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:38:39.610196  407330 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:38:39.610230  407330 kubeadm.go:319] OS: Linux
	I1210 06:38:39.610281  407330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:38:39.610328  407330 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:38:39.610374  407330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:38:39.610421  407330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:38:39.610468  407330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:38:39.610517  407330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:38:39.610561  407330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:38:39.610608  407330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:38:39.610653  407330 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:38:39.676087  407330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:38:39.676189  407330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:38:39.676279  407330 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:38:39.683789  407330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:38:39.689387  407330 out.go:252]   - Generating certificates and keys ...
	I1210 06:38:39.689490  407330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:38:39.689554  407330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:38:39.689629  407330 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:38:39.689689  407330 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:38:39.689759  407330 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:38:39.689811  407330 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:38:39.689904  407330 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:38:39.689978  407330 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:38:39.690060  407330 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:38:39.690139  407330 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:38:39.690176  407330 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:38:39.690241  407330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:38:40.131783  407330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:38:40.503719  407330 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:38:40.658362  407330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:38:41.256208  407330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:38:41.407412  407330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:38:41.408125  407330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:38:41.410853  407330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:38:41.414436  407330 out.go:252]   - Booting up control plane ...
	I1210 06:38:41.414546  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:38:41.414623  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:38:41.414696  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:38:41.431657  407330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:38:41.431964  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:38:41.440211  407330 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:38:41.440329  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:38:41.440568  407330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:38:41.565122  407330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:38:41.565287  407330 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:42:41.565436  407330 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000253721s
	I1210 06:42:41.565465  407330 kubeadm.go:319] 
	I1210 06:42:41.565522  407330 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:42:41.565554  407330 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:42:41.565658  407330 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:42:41.565663  407330 kubeadm.go:319] 
	I1210 06:42:41.565766  407330 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:42:41.565797  407330 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:42:41.565827  407330 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:42:41.565830  407330 kubeadm.go:319] 
	I1210 06:42:41.570718  407330 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:42:41.571209  407330 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:42:41.571330  407330 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:42:41.571595  407330 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:42:41.571607  407330 kubeadm.go:319] 
	I1210 06:42:41.571752  407330 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 06:42:41.571857  407330 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000253721s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
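A plausible cause is the repeated [WARNING SystemVerification] above: this host runs cgroups v1, and per the warning kubelet v1.35 or newer refuses cgroup v1 unless explicitly opted back in. A minimal sketch of that opt-in as a KubeletConfiguration fragment follows; the failCgroupV1 field name is inferred from the 'FailCgroupV1' option named in the warning, and the apiVersion shown is an assumption, not something taken from this run:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Assumed opt-in to keep running on cgroups v1 with kubelet v1.35+,
	# per the deprecation warning and KEP link printed above.
	failCgroupV1: false

The warning also says the validation itself must be explicitly skipped; the kubeadm init invocations in this log already pass SystemVerification in their --ignore-preflight-errors list, so only the kubelet configuration opt-in would be missing.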
	
	I1210 06:42:41.571950  407330 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 06:42:41.983114  407330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:42:41.996619  407330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:42:41.996677  407330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:42:42.015710  407330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:42:42.015721  407330 kubeadm.go:158] found existing configuration files:
	
	I1210 06:42:42.015783  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:42:42.031380  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:42:42.031448  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:42:42.040300  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:42:42.049113  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:42:42.049177  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:42:42.057272  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:42:42.066509  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:42:42.066573  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:42:42.076663  407330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:42:42.086749  407330 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:42:42.086829  407330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:42:42.096582  407330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:42:42.144385  407330 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:42:42.144469  407330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:42:42.248727  407330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:42:42.248801  407330 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:42:42.248835  407330 kubeadm.go:319] OS: Linux
	I1210 06:42:42.248888  407330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:42:42.248946  407330 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:42:42.249004  407330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:42:42.249052  407330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:42:42.249117  407330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:42:42.249198  407330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:42:42.249245  407330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:42:42.249306  407330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:42:42.249359  407330 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:42:42.316721  407330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:42:42.316825  407330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:42:42.316916  407330 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:42:42.325666  407330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:42:42.330985  407330 out.go:252]   - Generating certificates and keys ...
	I1210 06:42:42.331095  407330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:42:42.331182  407330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:42:42.331258  407330 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:42:42.331331  407330 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:42:42.331424  407330 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:42:42.331487  407330 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:42:42.331560  407330 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:42:42.331637  407330 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:42:42.331721  407330 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:42:42.331801  407330 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:42:42.331847  407330 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:42:42.331912  407330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:42:42.541750  407330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:42:43.048349  407330 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:42:43.167759  407330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:42:43.323314  407330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:42:43.407090  407330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:42:43.408333  407330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:42:43.412234  407330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:42:43.415621  407330 out.go:252]   - Booting up control plane ...
	I1210 06:42:43.415734  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:42:43.415811  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:42:43.416436  407330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:42:43.431439  407330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:42:43.431813  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:42:43.438586  407330 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:42:43.438900  407330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:42:43.438951  407330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:42:43.563199  407330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:42:43.563333  407330 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:46:43.563419  407330 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000308988s
	I1210 06:46:43.563446  407330 kubeadm.go:319] 
	I1210 06:46:43.563502  407330 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:46:43.563534  407330 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:46:43.563637  407330 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:46:43.563641  407330 kubeadm.go:319] 
	I1210 06:46:43.563744  407330 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:46:43.563775  407330 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:46:43.563804  407330 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:46:43.563807  407330 kubeadm.go:319] 
	I1210 06:46:43.567965  407330 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:46:43.568389  407330 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:46:43.568496  407330 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:46:43.568730  407330 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:46:43.568734  407330 kubeadm.go:319] 
	I1210 06:46:43.568801  407330 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:46:43.568851  407330 kubeadm.go:403] duration metric: took 12m8.481939807s to StartCluster
	I1210 06:46:43.568881  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:46:43.568941  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:46:43.595798  407330 cri.go:89] found id: ""
	I1210 06:46:43.595831  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.595854  407330 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:46:43.595860  407330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:46:43.595925  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:46:43.621092  407330 cri.go:89] found id: ""
	I1210 06:46:43.621107  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.621114  407330 logs.go:284] No container was found matching "etcd"
	I1210 06:46:43.621123  407330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:46:43.621181  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:46:43.646506  407330 cri.go:89] found id: ""
	I1210 06:46:43.646520  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.646528  407330 logs.go:284] No container was found matching "coredns"
	I1210 06:46:43.646533  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:46:43.646593  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:46:43.671975  407330 cri.go:89] found id: ""
	I1210 06:46:43.671990  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.671997  407330 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:46:43.672003  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:46:43.672059  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:46:43.698910  407330 cri.go:89] found id: ""
	I1210 06:46:43.698925  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.698932  407330 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:46:43.698937  407330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:46:43.698997  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:46:43.727644  407330 cri.go:89] found id: ""
	I1210 06:46:43.727660  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.727667  407330 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:46:43.727672  407330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:46:43.727732  407330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:46:43.752849  407330 cri.go:89] found id: ""
	I1210 06:46:43.752864  407330 logs.go:282] 0 containers: []
	W1210 06:46:43.752871  407330 logs.go:284] No container was found matching "kindnet"
	I1210 06:46:43.752879  407330 logs.go:123] Gathering logs for kubelet ...
	I1210 06:46:43.752889  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:46:43.818161  407330 logs.go:123] Gathering logs for dmesg ...
	I1210 06:46:43.818181  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:46:43.833400  407330 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:46:43.833417  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:46:43.902591  407330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:46:43.894870   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.895546   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897128   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897673   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.899158   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:46:43.894870   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.895546   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897128   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.897673   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:46:43.899158   21721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:46:43.902602  407330 logs.go:123] Gathering logs for CRI-O ...
	I1210 06:46:43.902614  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 06:46:43.975424  407330 logs.go:123] Gathering logs for container status ...
	I1210 06:46:43.975445  407330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 06:46:44.022327  407330 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:46:44.022377  407330 out.go:285] * 
	W1210 06:46:44.022442  407330 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:46:44.022452  407330 out.go:285] * 
	W1210 06:46:44.024584  407330 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:46:44.031496  407330 out.go:203] 
	W1210 06:46:44.034389  407330 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:46:44.034453  407330 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:46:44.034475  407330 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:46:44.037811  407330 out.go:203] 
	
	
	==> CRI-O <==
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914305234Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914347581Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914410941Z" level=info msg="Create NRI interface"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914519907Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914528243Z" level=info msg="runtime interface created"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914540707Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914547246Z" level=info msg="runtime interface starting up..."
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914553523Z" level=info msg="starting plugins..."
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914566389Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:34:32 functional-253997 crio[10563]: time="2025-12-10T06:34:32.914635518Z" level=info msg="No systemd watchdog enabled"
	Dec 10 06:34:32 functional-253997 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.679749304Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=256aed1f-deb7-4ef3-85cd-131eefce5f31 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.680508073Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=d66c85ac-bdac-47c8-b0cb-0b9c6495c2c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681012677Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=9d08e49c-548c-44b3-98b1-7f3a5851a031 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681572306Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=0bc6e3be-4b4d-4362-bc99-b8372d06365e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.681969496Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2f86c405-f63c-4d07-a2ec-618b9449eabe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.682410707Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f71d0106-3216-4008-9111-b1a84be0126f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:38:39 functional-253997 crio[10563]: time="2025-12-10T06:38:39.682849883Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=c187c18f-0638-4353-a242-3d51d64c2a33 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.320018913Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=2592a99c-52fd-46d2-9cab-44207ad0e0c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321028729Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=f6ff0057-6b99-4df5-9aca-bca9adc94791 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321868157Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=9722b5ff-a4b3-445c-b3c4-2f4e55341b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322453627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=b61d672b-0949-4316-8a7f-4071c86b03d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322949111Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=450c82e8-dfb9-4142-8b40-08c4b80c0ef4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.32354821Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=0920e0ed-8fd6-46c5-8404-88b74f388f67 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.324043932Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=51273fc0-dd5c-4357-b908-13dc96e1efa4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:48:40.696642   23225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:48:40.697389   23225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:48:40.699123   23225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:48:40.699761   23225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:48:40.701371   23225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 06:15] overlayfs: idmapped layers are currently not supported
	[Dec10 06:16] overlayfs: idmapped layers are currently not supported
	[Dec10 06:19] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:48:40 up  3:31,  0 user,  load average: 0.20, 0.18, 0.43
	Linux functional-253997 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:48:38 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:48:38 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 793.
	Dec 10 06:48:38 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:38 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:38 functional-253997 kubelet[23116]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:38 functional-253997 kubelet[23116]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:38 functional-253997 kubelet[23116]: E1210 06:48:38.939747   23116 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:48:38 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:48:38 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:48:39 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 794.
	Dec 10 06:48:39 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:39 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:39 functional-253997 kubelet[23129]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:39 functional-253997 kubelet[23129]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:39 functional-253997 kubelet[23129]: E1210 06:48:39.694458   23129 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:48:39 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:48:39 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:48:40 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 795.
	Dec 10 06:48:40 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:40 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:48:40 functional-253997 kubelet[23155]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:40 functional-253997 kubelet[23155]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:48:40 functional-253997 kubelet[23155]: E1210 06:48:40.436463   23155 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:48:40 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:48:40 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
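The kubelet log above shows the root cause of this failure mode: kubelet v1.35 refuses to start on a cgroup v1 host unless cgroup v1 support is explicitly re-enabled, so systemd restarts it in a loop (restart counter 793-795) and the apiserver on port 8441 never comes up. A minimal sketch of the opt-out described by the SystemVerification warning, assuming the kubeadm-managed config path from the log (/var/lib/kubelet/config.yaml), the YAML spelling failCgroupV1 of the 'FailCgroupV1' option it names, and that the key is not already present in the file:

	# hedged sketch: re-enable cgroup v1 support for kubelet v1.35+
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet

Whether the host is on cgroup v1 or v2 can be checked with 'stat -fc %T /sys/fs/cgroup' (prints cgroup2fs on v2, tmpfs on v1); migrating the host to cgroup v2 avoids the deprecated path altogether.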
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 2 (329.799872ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
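The status probe at helpers_test.go:263 uses minikube's Go-template output; the same check can be reproduced by hand against this profile (hypothetical manual invocation, not part of the harness):

	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p functional-253997
	# prints "Stopped" here, hence the tolerated exit status 2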
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
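The warnings that follow wrap a raw pod-list API call against the stopped apiserver; an equivalent manual query (hypothetical, pointed at the same endpoint) would be:

	kubectl --server=https://192.168.49.2:8441 -n kube-system \
	  get pods -l integration-test=storage-provisioner
	# refused while the apiserver is down, exactly as the warnings report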
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1210 06:46:58.799502  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1210 06:47:02.433519  364265 retry.go:31] will retry after 1.539983199s: Temporary Error: Get "http://10.97.8.47": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1210 06:47:13.974113  364265 retry.go:31] will retry after 6.466116036s: Temporary Error: Get "http://10.97.8.47": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1210 06:47:30.440614  364265 retry.go:31] will retry after 3.515999527s: Temporary Error: Get "http://10.97.8.47": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1210 06:47:43.957815  364265 retry.go:31] will retry after 12.259133346s: Temporary Error: Get "http://10.97.8.47": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1210 06:48:06.218378  364265 retry.go:31] will retry after 22.649819932s: Temporary Error: Get "http://10.97.8.47": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1210 06:48:38.175205  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	[identical WARNING repeated 84 times; duplicates elided]
E1210 06:50:01.912800  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	[identical WARNING repeated 50 times; duplicates elided]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 2 (331.1986ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
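Note: the repeated WARNING lines above come from a poll that lists pods in the "kube-system" namespace matching the label selector integration-test=storage-provisioner and retries until the 4m0s deadline expires. The sketch below is a minimal client-go approximation of that kind of poll, not the harness's actual implementation; the kubeconfig path is an assumed placeholder, while the namespace, label selector, and timeout are taken from the log.

	// pollpods.go: minimal client-go sketch of the pod poll shown above.
	// NOT the test harness's real code; the kubeconfig path is illustrative.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed path; point this at the kubeconfig for the profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// 4m0s matches the deadline reported in the failure above.
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		for {
			if ctx.Err() != nil {
				// Matches the failure above: no pod appeared within 4m0s.
				fmt.Println("timed out: context deadline exceeded")
				return
			}
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=storage-provisioner",
			})
			if err != nil {
				// While the apiserver is down, every iteration fails with
				// "dial tcp 192.168.49.2:8441: connect: connection refused".
				fmt.Println("WARNING: pod list returned:", err)
				time.Sleep(2 * time.Second)
				continue
			}
			if len(pods.Items) > 0 {
				fmt.Printf("found %d matching pod(s)\n", len(pods.Items))
				return
			}
			time.Sleep(2 * time.Second)
		}
	}

While the apiserver at 192.168.49.2:8441 refuses connections, every such List call fails exactly as recorded above, and the poll eventually exhausts its deadline with "context deadline exceeded".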
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-253997
helpers_test.go:244: (dbg) docker inspect functional-253997:

-- stdout --
	[
	    {
	        "Id": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	        "Created": "2025-12-10T06:19:33.832297734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:33.914699563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hosts",
	        "LogPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7-json.log",
	        "Name": "/functional-253997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-253997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-253997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	                "LowerDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-253997",
	                "Source": "/var/lib/docker/volumes/functional-253997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253997",
	                "name.minikube.sigs.k8s.io": "functional-253997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484c42edbd556e17e2c5a836571d3275bc9cbd157b70c100e08a5b73322a62e6",
	            "SandboxKey": "/var/run/docker/netns/484c42edbd55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ba:52:c5:6e:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fc5aadc020d5981cfdab05726de722ae81d653cbc30b66014568069657370fe7",
	                    "EndpointID": "9ecd4882ea5c9fe5581b68ad15a0a2f450e34777aee426fcc158008c81076e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253997",
	                        "256e059cdf1e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
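The inspect dump above can be narrowed to just the fields the port checks depend on. For example, a standard docker --format query (not a command helpers_test.go itself runs) pulls only the published host ports:

	docker inspect functional-253997 --format '{{json .NetworkSettings.Ports}}'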
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997: exit status 2 (307.077038ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
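For reference, the same status call can report several components in one template, which makes the Running-host/stopped-apiserver split in this report easier to spot (Host, Kubelet and APIServer are fields of minikube's documented status output; combining them here is illustrative):

	out/minikube-linux-arm64 status -p functional-253997 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'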
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-253997 image load --daemon kicbase/echo-server:functional-253997 --alsologtostderr                                                             │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image          │ functional-253997 image ls                                                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image          │ functional-253997 image save kicbase/echo-server:functional-253997 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image          │ functional-253997 image rm kicbase/echo-server:functional-253997 --alsologtostderr                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image          │ functional-253997 image ls                                                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image          │ functional-253997 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image          │ functional-253997 image ls                                                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image          │ functional-253997 image save --daemon kicbase/echo-server:functional-253997 --alsologtostderr                                                             │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ ssh            │ functional-253997 ssh sudo cat /etc/ssl/certs/364265.pem                                                                                                  │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ ssh            │ functional-253997 ssh sudo cat /usr/share/ca-certificates/364265.pem                                                                                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ ssh            │ functional-253997 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ ssh            │ functional-253997 ssh sudo cat /etc/ssl/certs/3642652.pem                                                                                                 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ ssh            │ functional-253997 ssh sudo cat /usr/share/ca-certificates/3642652.pem                                                                                     │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ ssh            │ functional-253997 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ ssh            │ functional-253997 ssh sudo cat /etc/test/nested/copy/364265/hosts                                                                                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image          │ functional-253997 image ls --format short --alsologtostderr                                                                                               │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image          │ functional-253997 image ls --format yaml --alsologtostderr                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ ssh            │ functional-253997 ssh pgrep buildkitd                                                                                                                     │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │                     │
	│ image          │ functional-253997 image build -t localhost/my-image:functional-253997 testdata/build --alsologtostderr                                                    │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image          │ functional-253997 image ls                                                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image          │ functional-253997 image ls --format json --alsologtostderr                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image          │ functional-253997 image ls --format table --alsologtostderr                                                                                               │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ update-context │ functional-253997 update-context --alsologtostderr -v=2                                                                                                   │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ update-context │ functional-253997 update-context --alsologtostderr -v=2                                                                                                   │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ update-context │ functional-253997 update-context --alsologtostderr -v=2                                                                                                   │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:48:56
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:48:56.685450  424691 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:48:56.685567  424691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:48:56.685574  424691 out.go:374] Setting ErrFile to fd 2...
	I1210 06:48:56.685579  424691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:48:56.686200  424691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:48:56.686646  424691 out.go:368] Setting JSON to false
	I1210 06:48:56.687478  424691 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12689,"bootTime":1765336648,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:48:56.687545  424691 start.go:143] virtualization:  
	I1210 06:48:56.690677  424691 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:48:56.694538  424691 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:48:56.694730  424691 notify.go:221] Checking for updates...
	I1210 06:48:56.700184  424691 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:48:56.703110  424691 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:48:56.706066  424691 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:48:56.709290  424691 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:48:56.712264  424691 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:48:56.715707  424691 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:48:56.716347  424691 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:48:56.743289  424691 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:48:56.743441  424691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:48:56.803116  424691 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:48:56.793157147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:48:56.803230  424691 docker.go:319] overlay module found
	I1210 06:48:56.806465  424691 out.go:179] * Using the docker driver based on existing profile
	I1210 06:48:56.809442  424691 start.go:309] selected driver: docker
	I1210 06:48:56.809468  424691 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:48:56.809571  424691 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:48:56.809682  424691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:48:56.863715  424691 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:48:56.854522382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:48:56.864156  424691 cni.go:84] Creating CNI manager for ""
	I1210 06:48:56.864220  424691 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:48:56.864260  424691 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:48:56.867300  424691 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.320018913Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=2592a99c-52fd-46d2-9cab-44207ad0e0c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321028729Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=f6ff0057-6b99-4df5-9aca-bca9adc94791 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321868157Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=9722b5ff-a4b3-445c-b3c4-2f4e55341b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322453627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=b61d672b-0949-4316-8a7f-4071c86b03d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322949111Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=450c82e8-dfb9-4142-8b40-08c4b80c0ef4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.32354821Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=0920e0ed-8fd6-46c5-8404-88b74f388f67 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.324043932Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=51273fc0-dd5c-4357-b908-13dc96e1efa4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.459978374Z" level=info msg="Checking image status: kicbase/echo-server:functional-253997" id=6abefe2e-7521-47b1-8899-b8d6c83e2925 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.460212969Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.460270479Z" level=info msg="Image kicbase/echo-server:functional-253997 not found" id=6abefe2e-7521-47b1-8899-b8d6c83e2925 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.460359562Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-253997 found" id=6abefe2e-7521-47b1-8899-b8d6c83e2925 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.489569084Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-253997" id=c064dc43-1761-4769-858b-3d0714781d14 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.48976097Z" level=info msg="Image docker.io/kicbase/echo-server:functional-253997 not found" id=c064dc43-1761-4769-858b-3d0714781d14 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.489809939Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-253997 found" id=c064dc43-1761-4769-858b-3d0714781d14 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.516505993Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-253997" id=3c030604-ce2f-430e-a653-8b9e05d5ab07 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.516688664Z" level=info msg="Image localhost/kicbase/echo-server:functional-253997 not found" id=3c030604-ce2f-430e-a653-8b9e05d5ab07 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.516749818Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-253997 found" id=3c030604-ce2f-430e-a653-8b9e05d5ab07 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.600810727Z" level=info msg="Checking image status: kicbase/echo-server:functional-253997" id=bd6fd163-7760-4a60-9aff-2fd20844b643 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.60100304Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.601080703Z" level=info msg="Image kicbase/echo-server:functional-253997 not found" id=bd6fd163-7760-4a60-9aff-2fd20844b643 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.601165109Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-253997 found" id=bd6fd163-7760-4a60-9aff-2fd20844b643 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.627729577Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-253997" id=7b3f3dee-677b-459d-a95f-b7641deaed83 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.627884491Z" level=info msg="Image docker.io/kicbase/echo-server:functional-253997 not found" id=7b3f3dee-677b-459d-a95f-b7641deaed83 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.627924031Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-253997 found" id=7b3f3dee-677b-459d-a95f-b7641deaed83 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.655585072Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-253997" id=88ea8f87-00ed-4963-a631-733a1588f433 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:50:54.136103   26002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:54.136784   26002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:54.138273   26002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:54.138748   26002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:50:54.140273   26002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 06:15] overlayfs: idmapped layers are currently not supported
	[Dec10 06:16] overlayfs: idmapped layers are currently not supported
	[Dec10 06:19] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:50:54 up  3:33,  0 user,  load average: 0.15, 0.23, 0.42
	Linux functional-253997 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:50:51 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:52 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 971.
	Dec 10 06:50:52 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:52 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:52 functional-253997 kubelet[25875]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:50:52 functional-253997 kubelet[25875]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:50:52 functional-253997 kubelet[25875]: E1210 06:50:52.419896   25875 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:52 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:52 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:53 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 972.
	Dec 10 06:50:53 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:53 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:53 functional-253997 kubelet[25893]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:50:53 functional-253997 kubelet[25893]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:50:53 functional-253997 kubelet[25893]: E1210 06:50:53.164329   25893 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:53 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:53 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:53 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 973.
	Dec 10 06:50:53 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:53 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:53 functional-253997 kubelet[25954]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:50:53 functional-253997 kubelet[25954]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:50:53 functional-253997 kubelet[25954]: E1210 06:50:53.951478   25954 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:53 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:53 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
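The kubelet section in the logs above carries the root cause: the v1.35.0-rc.1 kubelet refuses to validate its configuration on a cgroup v1 host. Which cgroup version the node is actually running can be confirmed directly (a generic check, run on the node, e.g. via minikube ssh; not part of the test suite):

	# prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup/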
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 2 (305.697485ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.69s)
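The restart counter climbing from 971 to 973 within roughly two seconds marks a tight crash loop rather than a one-off failure. If more of the loop's history is needed, it can be pulled from the node with a generic systemd query (not something the harness runs):

	out/minikube-linux-arm64 -p functional-253997 ssh -- sudo journalctl -u kubelet --no-pager -n 50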

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (1.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-253997 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-253997 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (64.800674ms)

                                                
                                                
** stderr ** 
	E1210 06:49:05.218349  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.219989  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.221463  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.222900  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.224291  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-253997 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1210 06:49:05.218349  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.219989  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.221463  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.222900  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.224291  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1210 06:49:05.218349  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.219989  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.221463  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.222900  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.224291  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1210 06:49:05.218349  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.219989  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.221463  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.222900  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.224291  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1210 06:49:05.218349  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.219989  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.221463  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.222900  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.224291  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1210 06:49:05.218349  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.219989  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.221463  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.222900  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:49:05.224291  425907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
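All five label assertions above fail for the same underlying reason: nothing is listening on 192.168.49.2:8441. A quick reachability probe (generic, not part of functional_test.go) separates a genuinely missing label from an unreachable apiserver:

	curl -k --max-time 5 https://192.168.49.2:8441/healthz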
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-253997
helpers_test.go:244: (dbg) docker inspect functional-253997:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	        "Created": "2025-12-10T06:19:33.832297734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:33.914699563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/hosts",
	        "LogPath": "/var/lib/docker/containers/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7/256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7-json.log",
	        "Name": "/functional-253997",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-253997:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-253997",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "256e059cdf1e61bbde32886fae049f2b5055f060d17f8227c1c0bd100035a4a7",
	                "LowerDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e902755335cc7eee64e1fda7133b6950b3ed48289fbcf62a0e92a916ca2e0536/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-253997",
	                "Source": "/var/lib/docker/volumes/functional-253997/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-253997",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-253997",
	                "name.minikube.sigs.k8s.io": "functional-253997",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "484c42edbd556e17e2c5a836571d3275bc9cbd157b70c100e08a5b73322a62e6",
	            "SandboxKey": "/var/run/docker/netns/484c42edbd55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-253997": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:ba:52:c5:6e:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fc5aadc020d5981cfdab05726de722ae81d653cbc30b66014568069657370fe7",
	                    "EndpointID": "9ecd4882ea5c9fe5581b68ad15a0a2f450e34777aee426fcc158008c81076e5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-253997",
	                        "256e059cdf1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
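The Ports map in the inspect output above is where the suite later resolves host-mapped ports from (see the cli_runner call with "22/tcp" further down in this log). A minimal sketch of that lookup, assuming a local docker CLI and the container name from this report; it shells out with the same Go template rather than reproducing the suite's helper:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Index into .NetworkSettings.Ports by key, then into the slice of
		// bindings; for 22/tcp the dump above yields HostPort "33159".
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", format, "functional-253997").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}
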
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-253997 -n functional-253997: exit status 2 (324.910246ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount2 --alsologtostderr -v=1                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ mount     │ -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount3 --alsologtostderr -v=1                      │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh       │ functional-253997 ssh findmnt -T /mount1                                                                                                                  │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh       │ functional-253997 ssh findmnt -T /mount2                                                                                                                  │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh       │ functional-253997 ssh findmnt -T /mount3                                                                                                                  │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ mount     │ -p functional-253997 --kill=true                                                                                                                          │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ start     │ -p functional-253997 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1               │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ start     │ -p functional-253997 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1               │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ start     │ -p functional-253997 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                         │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-253997 --alsologtostderr -v=1                                                                                            │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ license   │                                                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ ssh       │ functional-253997 ssh sudo systemctl is-active docker                                                                                                     │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ ssh       │ functional-253997 ssh sudo systemctl is-active containerd                                                                                                 │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │                     │
	│ image     │ functional-253997 image load --daemon kicbase/echo-server:functional-253997 --alsologtostderr                                                             │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:49 UTC │
	│ image     │ functional-253997 image ls                                                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image     │ functional-253997 image load --daemon kicbase/echo-server:functional-253997 --alsologtostderr                                                             │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image     │ functional-253997 image ls                                                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image     │ functional-253997 image load --daemon kicbase/echo-server:functional-253997 --alsologtostderr                                                             │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image     │ functional-253997 image ls                                                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image     │ functional-253997 image save kicbase/echo-server:functional-253997 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image     │ functional-253997 image rm kicbase/echo-server:functional-253997 --alsologtostderr                                                                        │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image     │ functional-253997 image ls                                                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image     │ functional-253997 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image     │ functional-253997 image ls                                                                                                                                │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image     │ functional-253997 image save --daemon kicbase/echo-server:functional-253997 --alsologtostderr                                                             │ functional-253997 │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:48:56
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:48:56.685450  424691 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:48:56.685567  424691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:48:56.685574  424691 out.go:374] Setting ErrFile to fd 2...
	I1210 06:48:56.685579  424691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:48:56.686200  424691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:48:56.686646  424691 out.go:368] Setting JSON to false
	I1210 06:48:56.687478  424691 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12689,"bootTime":1765336648,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:48:56.687545  424691 start.go:143] virtualization:  
	I1210 06:48:56.690677  424691 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:48:56.694538  424691 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:48:56.694730  424691 notify.go:221] Checking for updates...
	I1210 06:48:56.700184  424691 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:48:56.703110  424691 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:48:56.706066  424691 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:48:56.709290  424691 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:48:56.712264  424691 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:48:56.715707  424691 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:48:56.716347  424691 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:48:56.743289  424691 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:48:56.743441  424691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:48:56.803116  424691 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:48:56.793157147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:48:56.803230  424691 docker.go:319] overlay module found
	I1210 06:48:56.806465  424691 out.go:179] * Using the docker driver based on existing profile
	I1210 06:48:56.809442  424691 start.go:309] selected driver: docker
	I1210 06:48:56.809468  424691 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:48:56.809571  424691 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:48:56.809682  424691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:48:56.863715  424691 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:48:56.854522382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:48:56.864156  424691 cni.go:84] Creating CNI manager for ""
	I1210 06:48:56.864220  424691 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:48:56.864260  424691 start.go:353] cluster config:
	{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:48:56.867300  424691 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.320018913Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=2592a99c-52fd-46d2-9cab-44207ad0e0c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321028729Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=f6ff0057-6b99-4df5-9aca-bca9adc94791 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.321868157Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=9722b5ff-a4b3-445c-b3c4-2f4e55341b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322453627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=b61d672b-0949-4316-8a7f-4071c86b03d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.322949111Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=450c82e8-dfb9-4142-8b40-08c4b80c0ef4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.32354821Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=0920e0ed-8fd6-46c5-8404-88b74f388f67 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:42:42 functional-253997 crio[10563]: time="2025-12-10T06:42:42.324043932Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=51273fc0-dd5c-4357-b908-13dc96e1efa4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.459978374Z" level=info msg="Checking image status: kicbase/echo-server:functional-253997" id=6abefe2e-7521-47b1-8899-b8d6c83e2925 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.460212969Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.460270479Z" level=info msg="Image kicbase/echo-server:functional-253997 not found" id=6abefe2e-7521-47b1-8899-b8d6c83e2925 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.460359562Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-253997 found" id=6abefe2e-7521-47b1-8899-b8d6c83e2925 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.489569084Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-253997" id=c064dc43-1761-4769-858b-3d0714781d14 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.48976097Z" level=info msg="Image docker.io/kicbase/echo-server:functional-253997 not found" id=c064dc43-1761-4769-858b-3d0714781d14 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.489809939Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-253997 found" id=c064dc43-1761-4769-858b-3d0714781d14 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.516505993Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-253997" id=3c030604-ce2f-430e-a653-8b9e05d5ab07 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.516688664Z" level=info msg="Image localhost/kicbase/echo-server:functional-253997 not found" id=3c030604-ce2f-430e-a653-8b9e05d5ab07 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:00 functional-253997 crio[10563]: time="2025-12-10T06:49:00.516749818Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-253997 found" id=3c030604-ce2f-430e-a653-8b9e05d5ab07 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.600810727Z" level=info msg="Checking image status: kicbase/echo-server:functional-253997" id=bd6fd163-7760-4a60-9aff-2fd20844b643 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.60100304Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.601080703Z" level=info msg="Image kicbase/echo-server:functional-253997 not found" id=bd6fd163-7760-4a60-9aff-2fd20844b643 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.601165109Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-253997 found" id=bd6fd163-7760-4a60-9aff-2fd20844b643 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.627729577Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-253997" id=7b3f3dee-677b-459d-a95f-b7641deaed83 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.627884491Z" level=info msg="Image docker.io/kicbase/echo-server:functional-253997 not found" id=7b3f3dee-677b-459d-a95f-b7641deaed83 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.627924031Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-253997 found" id=7b3f3dee-677b-459d-a95f-b7641deaed83 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:49:03 functional-253997 crio[10563]: time="2025-12-10T06:49:03.655585072Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-253997" id=88ea8f87-00ed-4963-a631-733a1588f433 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:49:06.192398   24611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:49:06.193244   24611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:49:06.194785   24611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:49:06.195224   24611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:49:06.196727   24611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 03:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015165] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.514331] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032961] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.807794] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.310854] kauditd_printk_skb: 39 callbacks suppressed
	[Dec10 04:51] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000001f [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000bd7dad86{9P.session} n=00000000db0bc116
	[  +0.001142] FS-Cache: O-key=[10] '34323936323939323530'
	[  +0.000773] FS-Cache: N-cookie c=00000020 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000bd7dad86{9P.session} n=00000000040fb2e3
	[  +0.001079] FS-Cache: N-key=[10] '34323936323939323530'
	[Dec10 04:56] hrtimer: interrupt took 8286122 ns
	[Dec10 06:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 06:10] overlayfs: idmapped layers are currently not supported
	[ +21.611451] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 06:15] overlayfs: idmapped layers are currently not supported
	[Dec10 06:16] overlayfs: idmapped layers are currently not supported
	[Dec10 06:19] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 06:49:06 up  3:31,  0 user,  load average: 0.59, 0.28, 0.45
	Linux functional-253997 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:49:03 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:49:04 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 10 06:49:04 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:49:04 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:49:04 functional-253997 kubelet[24450]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:49:04 functional-253997 kubelet[24450]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:49:04 functional-253997 kubelet[24450]: E1210 06:49:04.434379   24450 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:49:04 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:49:04 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:49:05 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 10 06:49:05 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:49:05 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:49:05 functional-253997 kubelet[24501]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:49:05 functional-253997 kubelet[24501]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:49:05 functional-253997 kubelet[24501]: E1210 06:49:05.110611   24501 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:49:05 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:49:05 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:49:05 functional-253997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 10 06:49:05 functional-253997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:49:05 functional-253997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:49:05 functional-253997 kubelet[24547]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:49:05 functional-253997 kubelet[24547]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 06:49:05 functional-253997 kubelet[24547]: E1210 06:49:05.948534   24547 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:49:05 functional-253997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:49:05 functional-253997 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-253997 -n functional-253997: exit status 2 (314.67244ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-253997" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (1.44s)
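
The kubelet section of the log above shows the actual blocker behind this group of failures: the v1.35.0-rc.1 kubelet refuses to start on a cgroup v1 host (the restart counter is already past 800), so the apiserver never comes back and every kubectl call is refused. A minimal sketch of how to check which cgroup version a host mounts, assuming Linux and the golang.org/x/sys/unix module; this is a standard filesystem-magic check, not something the suite itself runs:

	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var fs unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		// cgroup v2 mounts cgroup2fs at /sys/fs/cgroup; a v1/hybrid host
		// (like the Ubuntu 20.04 machine this job ran on) mounts a tmpfs
		// with per-controller subdirectories instead.
		if fs.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 - matches the kubelet validation error above")
		}
	}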

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-253997 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-253997 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1210 06:46:51.946605  420405 out.go:360] Setting OutFile to fd 1 ...
I1210 06:46:51.949365  420405 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:46:51.949385  420405 out.go:374] Setting ErrFile to fd 2...
I1210 06:46:51.949393  420405 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:46:51.949739  420405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:46:51.950175  420405 mustload.go:66] Loading cluster: functional-253997
I1210 06:46:51.952022  420405 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:46:51.952703  420405 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
I1210 06:46:51.994722  420405 host.go:66] Checking if "functional-253997" exists ...
I1210 06:46:51.995044  420405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 06:46:52.111287  420405 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:46:52.100086017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 06:46:52.111408  420405 api_server.go:166] Checking apiserver status ...
I1210 06:46:52.111474  420405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 06:46:52.111525  420405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
I1210 06:46:52.147157  420405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
W1210 06:46:52.282973  420405 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1210 06:46:52.286071  420405 out.go:179] * The control-plane node functional-253997 apiserver is not running: (state=Stopped)
I1210 06:46:52.289881  420405 out.go:179]   To start a cluster, run: "minikube start -p functional-253997"

stdout: * The control-plane node functional-253997 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-253997"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-253997 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-253997 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-253997 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-253997 tunnel --alsologtostderr] ...
helpers_test.go:520: unable to terminate pid 420404: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-253997 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-253997 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)
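
Before starting, the tunnel command probes for a running apiserver by running pgrep inside the node over SSH (api_server.go in the stderr above); exit status 1 from pgrep is what maps to state=Stopped. A minimal local sketch of that interpretation, assuming pgrep and sudo are available on the machine running it; this is an illustration of the exit-code handling, not the suite's SSH-based helper:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same pattern the precheck greps for inside the node.
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
			// pgrep exits 1 when no process matches - the Stopped case above.
			fmt.Println("no kube-apiserver process found")
			return
		}
		if err != nil {
			fmt.Println("pgrep failed:", err)
			return
		}
		fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
	}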

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-253997 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-253997 apply -f testdata/testsvc.yaml: exit status 1 (116.979495ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-253997 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (106.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.97.8.47": Temporary Error: Get "http://10.97.8.47": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-253997 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-253997 get svc nginx-svc: exit status 1 (79.238781ms)

** stderr ** 
	E1210 06:48:38.938274  421508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:38.939913  421508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:38.941539  421508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:38.943043  421508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1210 06:48:38.944517  421508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-253997 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (106.52s)
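
The "(Client.Timeout exceeded while awaiting headers)" text in the first failure line is the error Go's http.Client produces when its per-request timeout expires before any response headers arrive; the test wraps it as a Temporary Error while polling. A minimal reproduction of that failure mode, using the ClusterIP from this log (with the tunnel gone and the apiserver down, nothing routes that address, so the request stalls until the timeout fires):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second} // short per-request timeout
		resp, err := client.Get("http://10.97.8.47")     // ClusterIP from the log above
		if err != nil {
			// With nothing routing the address, this error ends in
			// "(Client.Timeout exceeded while awaiting headers)".
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}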

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-253997 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-253997 create deployment hello-node --image kicbase/echo-server: exit status 1 (58.085528ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-253997 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 service list: exit status 103 (265.992569ms)

-- stdout --
	* The control-plane node functional-253997 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-253997"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-253997 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-253997 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-253997\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 service list -o json: exit status 103 (291.607622ms)

-- stdout --
	* The control-plane node functional-253997 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-253997"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-253997 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 service --namespace=default --https --url hello-node: exit status 103 (261.91751ms)

-- stdout --
	* The control-plane node functional-253997 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-253997"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-253997 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 service hello-node --url --format={{.IP}}: exit status 103 (289.0037ms)

-- stdout --
	* The control-plane node functional-253997 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-253997"

                                                
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-253997 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-253997 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-253997\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.29s)
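
The --format flag takes a Go text/template, and the test validates whatever the rendered output is as an IP address, which is why the stopped-apiserver advice above is rejected at line 1558. A small illustration of that validation step, using a stand-in struct rather than minikube's real data type:

// Render a {{.IP}} template and validate the result with net.ParseIP.
package main

import (
	"bytes"
	"fmt"
	"net"
	"text/template"
)

func main() {
	tmpl := template.Must(template.New("svc").Parse("{{.IP}}"))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, struct{ IP string }{IP: "192.168.49.2"}); err != nil {
		fmt.Println("template error:", err)
		return
	}
	if net.ParseIP(buf.String()) == nil {
		fmt.Printf("%q is not a valid IP\n", buf.String())
		return
	}
	fmt.Println("valid IP:", buf.String())
}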

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 service hello-node --url: exit status 103 (261.510115ms)

-- stdout --
	* The control-plane node functional-253997 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-253997"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-253997 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-253997 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-253997"
functional_test.go:1579: failed to parse "* The control-plane node functional-253997 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-253997\"": parse "* The control-plane node functional-253997 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-253997\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.26s)
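
The final error above comes straight from the standard library: the test feeds the command's output to net/url parsing, and the advice text contains a newline, which url.Parse rejects as a control character. The failure is reproducible with just the stdlib:

// Reproduce "net/url: invalid control character in URL" for multi-line text.
package main

import (
	"fmt"
	"net/url"
)

func main() {
	advice := "* The control-plane node functional-253997 apiserver is not running: (state=Stopped)\n" +
		"  To start a cluster, run: \"minikube start -p functional-253997\""
	if _, err := url.Parse(advice); err != nil {
		fmt.Println(err)
	}
}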

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765349326704343820" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765349326704343820" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765349326704343820" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001/test-1765349326704343820
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.753388ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:48:47.045360  364265 retry.go:31] will retry after 487.520812ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 06:48 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 06:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 06:48 test-1765349326704343820
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh cat /mount-9p/test-1765349326704343820
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-253997 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-253997 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (54.645756ms)

** stderr ** 
	E1210 06:48:48.439052  423078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-253997 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (276.170746ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=40555)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 10 06:48 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 10 06:48 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 10 06:48 test-1765349326704343820
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-253997 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:40555
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001:/mount-9p --alsologtostderr -v=1] stderr:
I1210 06:48:46.771782  422738 out.go:360] Setting OutFile to fd 1 ...
I1210 06:48:46.771946  422738 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:48:46.771959  422738 out.go:374] Setting ErrFile to fd 2...
I1210 06:48:46.771965  422738 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:48:46.772325  422738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:48:46.772638  422738 mustload.go:66] Loading cluster: functional-253997
I1210 06:48:46.773457  422738 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:48:46.774018  422738 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
I1210 06:48:46.800934  422738 host.go:66] Checking if "functional-253997" exists ...
I1210 06:48:46.801306  422738 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 06:48:46.904510  422738 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:48:46.894501096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 06:48:46.904678  422738 cli_runner.go:164] Run: docker network inspect functional-253997 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 06:48:46.928265  422738 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001 into VM as /mount-9p ...
I1210 06:48:46.935420  422738 out.go:179]   - Mount type:   9p
I1210 06:48:46.938304  422738 out.go:179]   - User ID:      docker
I1210 06:48:46.941254  422738 out.go:179]   - Group ID:     docker
I1210 06:48:46.945326  422738 out.go:179]   - Version:      9p2000.L
I1210 06:48:46.948247  422738 out.go:179]   - Message Size: 262144
I1210 06:48:46.951099  422738 out.go:179]   - Options:      map[]
I1210 06:48:46.954080  422738 out.go:179]   - Bind Address: 192.168.49.1:40555
I1210 06:48:46.956872  422738 out.go:179] * Userspace file server: 
I1210 06:48:46.957233  422738 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1210 06:48:46.957327  422738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
I1210 06:48:46.975687  422738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
I1210 06:48:47.084385  422738 mount.go:180] unmount for /mount-9p ran successfully
I1210 06:48:47.084421  422738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1210 06:48:47.092849  422738 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=40555,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1210 06:48:47.103722  422738 main.go:127] stdlog: ufs.go:141 connected
I1210 06:48:47.103895  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tversion tag 65535 msize 262144 version '9P2000.L'
I1210 06:48:47.103943  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rversion tag 65535 msize 262144 version '9P2000'
I1210 06:48:47.104181  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1210 06:48:47.104248  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rattach tag 0 aqid (44363 704ff6a 'd')
I1210 06:48:47.105060  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 0
I1210 06:48:47.105138  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44363 704ff6a 'd') m d775 at 0 mt 1765349326 l 4096 t 0 d 0 ext )
I1210 06:48:47.110385  422738 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/.mount-process: {Name:mkd9e16b35a2b948a62cd529f15bafd8eeabe081 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:48:47.110564  422738 mount.go:105] mount successful: ""
I1210 06:48:47.114031  422738 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2217133637/001 to /mount-9p
I1210 06:48:47.116838  422738 out.go:203] 
I1210 06:48:47.119729  422738 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1210 06:48:48.085728  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 0
I1210 06:48:48.085807  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44363 704ff6a 'd') m d775 at 0 mt 1765349326 l 4096 t 0 d 0 ext )
I1210 06:48:48.086169  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Twalk tag 0 fid 0 newfid 1 
I1210 06:48:48.086207  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rwalk tag 0 
I1210 06:48:48.086354  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Topen tag 0 fid 1 mode 0
I1210 06:48:48.086403  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Ropen tag 0 qid (44363 704ff6a 'd') iounit 0
I1210 06:48:48.086538  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 0
I1210 06:48:48.086576  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44363 704ff6a 'd') m d775 at 0 mt 1765349326 l 4096 t 0 d 0 ext )
I1210 06:48:48.086744  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tread tag 0 fid 1 offset 0 count 262120
I1210 06:48:48.086870  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rread tag 0 count 258
I1210 06:48:48.087016  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tread tag 0 fid 1 offset 258 count 261862
I1210 06:48:48.087048  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rread tag 0 count 0
I1210 06:48:48.087190  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tread tag 0 fid 1 offset 258 count 262120
I1210 06:48:48.087219  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rread tag 0 count 0
I1210 06:48:48.087406  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1210 06:48:48.087461  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rwalk tag 0 (4436b 704ff6a '') 
I1210 06:48:48.087600  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 2
I1210 06:48:48.087652  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4436b 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.087812  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 2
I1210 06:48:48.087864  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4436b 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.088004  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tclunk tag 0 fid 2
I1210 06:48:48.088041  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rclunk tag 0
I1210 06:48:48.088206  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Twalk tag 0 fid 0 newfid 2 0:'test-1765349326704343820' 
I1210 06:48:48.088251  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rwalk tag 0 (4436d 704ff6a '') 
I1210 06:48:48.088381  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 2
I1210 06:48:48.088418  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('test-1765349326704343820' 'jenkins' 'jenkins' '' q (4436d 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.088546  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 2
I1210 06:48:48.088588  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('test-1765349326704343820' 'jenkins' 'jenkins' '' q (4436d 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.088776  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tclunk tag 0 fid 2
I1210 06:48:48.088811  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rclunk tag 0
I1210 06:48:48.088969  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1210 06:48:48.089012  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rwalk tag 0 (4436c 704ff6a '') 
I1210 06:48:48.089139  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 2
I1210 06:48:48.089178  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4436c 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.089320  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 2
I1210 06:48:48.089358  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4436c 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.089483  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tclunk tag 0 fid 2
I1210 06:48:48.089505  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rclunk tag 0
I1210 06:48:48.089640  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tread tag 0 fid 1 offset 258 count 262120
I1210 06:48:48.089682  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rread tag 0 count 0
I1210 06:48:48.089838  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tclunk tag 0 fid 1
I1210 06:48:48.089889  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rclunk tag 0
I1210 06:48:48.373381  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Twalk tag 0 fid 0 newfid 1 0:'test-1765349326704343820' 
I1210 06:48:48.373458  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rwalk tag 0 (4436d 704ff6a '') 
I1210 06:48:48.373634  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 1
I1210 06:48:48.373678  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('test-1765349326704343820' 'jenkins' 'jenkins' '' q (4436d 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.373822  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Twalk tag 0 fid 1 newfid 2 
I1210 06:48:48.373850  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rwalk tag 0 
I1210 06:48:48.373982  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Topen tag 0 fid 2 mode 0
I1210 06:48:48.374033  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Ropen tag 0 qid (4436d 704ff6a '') iounit 0
I1210 06:48:48.374161  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 1
I1210 06:48:48.374211  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('test-1765349326704343820' 'jenkins' 'jenkins' '' q (4436d 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.374370  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tread tag 0 fid 2 offset 0 count 262120
I1210 06:48:48.374418  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rread tag 0 count 24
I1210 06:48:48.374549  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tread tag 0 fid 2 offset 24 count 262120
I1210 06:48:48.374580  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rread tag 0 count 0
I1210 06:48:48.374736  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tread tag 0 fid 2 offset 24 count 262120
I1210 06:48:48.374785  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rread tag 0 count 0
I1210 06:48:48.374993  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tclunk tag 0 fid 2
I1210 06:48:48.375034  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rclunk tag 0
I1210 06:48:48.375205  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tclunk tag 0 fid 1
I1210 06:48:48.375238  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rclunk tag 0
I1210 06:48:48.708242  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 0
I1210 06:48:48.708322  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44363 704ff6a 'd') m d775 at 0 mt 1765349326 l 4096 t 0 d 0 ext )
I1210 06:48:48.708722  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Twalk tag 0 fid 0 newfid 1 
I1210 06:48:48.708770  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rwalk tag 0 
I1210 06:48:48.708928  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Topen tag 0 fid 1 mode 0
I1210 06:48:48.708979  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Ropen tag 0 qid (44363 704ff6a 'd') iounit 0
I1210 06:48:48.709122  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 0
I1210 06:48:48.709159  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44363 704ff6a 'd') m d775 at 0 mt 1765349326 l 4096 t 0 d 0 ext )
I1210 06:48:48.709338  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tread tag 0 fid 1 offset 0 count 262120
I1210 06:48:48.709451  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rread tag 0 count 258
I1210 06:48:48.709585  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tread tag 0 fid 1 offset 258 count 261862
I1210 06:48:48.709614  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rread tag 0 count 0
I1210 06:48:48.709767  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tread tag 0 fid 1 offset 258 count 262120
I1210 06:48:48.709811  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rread tag 0 count 0
I1210 06:48:48.709972  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1210 06:48:48.710008  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rwalk tag 0 (4436b 704ff6a '') 
I1210 06:48:48.710132  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 2
I1210 06:48:48.710168  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4436b 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.710328  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 2
I1210 06:48:48.710363  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4436b 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.710494  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tclunk tag 0 fid 2
I1210 06:48:48.710521  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rclunk tag 0
I1210 06:48:48.710680  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Twalk tag 0 fid 0 newfid 2 0:'test-1765349326704343820' 
I1210 06:48:48.710719  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rwalk tag 0 (4436d 704ff6a '') 
I1210 06:48:48.710842  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 2
I1210 06:48:48.710873  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('test-1765349326704343820' 'jenkins' 'jenkins' '' q (4436d 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.711010  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 2
I1210 06:48:48.711039  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('test-1765349326704343820' 'jenkins' 'jenkins' '' q (4436d 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.711174  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tclunk tag 0 fid 2
I1210 06:48:48.711199  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rclunk tag 0
I1210 06:48:48.711348  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1210 06:48:48.711383  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rwalk tag 0 (4436c 704ff6a '') 
I1210 06:48:48.711497  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 2
I1210 06:48:48.711534  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4436c 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.711670  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tstat tag 0 fid 2
I1210 06:48:48.711712  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4436c 704ff6a '') m 644 at 0 mt 1765349326 l 24 t 0 d 0 ext )
I1210 06:48:48.711844  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tclunk tag 0 fid 2
I1210 06:48:48.711865  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rclunk tag 0
I1210 06:48:48.712035  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tread tag 0 fid 1 offset 258 count 262120
I1210 06:48:48.712060  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rread tag 0 count 0
I1210 06:48:48.712207  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tclunk tag 0 fid 1
I1210 06:48:48.712234  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rclunk tag 0
I1210 06:48:48.713471  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1210 06:48:48.713535  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rerror tag 0 ename 'file not found' ecode 0
I1210 06:48:48.992486  422738 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39766 Tclunk tag 0 fid 0
I1210 06:48:48.992542  422738 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39766 Rclunk tag 0
I1210 06:48:48.993678  422738 main.go:127] stdlog: ufs.go:147 disconnected
I1210 06:48:49.016722  422738 out.go:179] * Unmounting /mount-9p ...
I1210 06:48:49.019676  422738 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1210 06:48:49.027235  422738 mount.go:180] unmount for /mount-9p ran successfully
I1210 06:48:49.027346  422738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/.mount-process: {Name:mkd9e16b35a2b948a62cd529f15bafd8eeabe081 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:48:49.030541  422738 out.go:203] 
W1210 06:48:49.033551  422738 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1210 06:48:49.036449  422738 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.41s)
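
Note that the 9p mount itself succeeded; the test fails only because kubectl cannot reach the apiserver on 192.168.49.2:8441. The stderr trace above is a complete 9P session: the guest kernel sends Tversion (tag 65535, msize 262144, version '9P2000.L'), the userspace ufs server downgrades to '9P2000', and the host directory is then served through Tattach/Twalk/Tstat/Tread. As a sketch only (protocol illustration, not minikube code), here is the standard 9P2000 wire framing, size[4] type[1] tag[2] msize[4] version[s] in little-endian, applied to that Tversion:

// Encode the Tversion message seen in the trace above.
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	const tversion = 100  // Tversion message type in 9P2000
	const notag = 0xFFFF  // NOTAG, used for version negotiation
	version := "9P2000.L" // client offer; the server above replied '9P2000'

	var body bytes.Buffer
	body.WriteByte(tversion)
	binary.Write(&body, binary.LittleEndian, uint16(notag))
	binary.Write(&body, binary.LittleEndian, uint32(262144)) // msize from the trace
	binary.Write(&body, binary.LittleEndian, uint16(len(version)))
	body.WriteString(version)

	var msg bytes.Buffer
	binary.Write(&msg, binary.LittleEndian, uint32(4+body.Len())) // size[4] counts itself
	msg.Write(body.Bytes())
	fmt.Printf("% x\n", msg.Bytes())
}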

TestJSONOutput/pause/Command (2.4s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-933033 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-933033 --output=json --user=testUser: exit status 80 (2.40197965s)

-- stdout --
	{"specversion":"1.0","id":"85aac3a7-3411-4c6b-baa9-f081e10ff52d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-933033 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"cfdb958a-9a11-449f-b1b3-d13477209add","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-10T07:02:57Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"9af6ebd3-fedc-4581-8289-46ab5bb4bff3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-933033 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.40s)
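
With --output=json, minikube writes one CloudEvent per line to stdout, which is what the pause and unpause tests parse; the failure above surfaces as an event of type io.k8s.sigs.minikube.error carrying exit code 80 (runc cannot find /run/runc on this crio node). A sketch of scanning that stream for error events, using only the fields visible in the log:

// Decode minikube's line-delimited CloudEvents and report error events.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type cloudEvent struct {
	Specversion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube pause --output=json | thisprogram
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s: name=%s exitcode=%s\n", ev.ID, ev.Data["name"], ev.Data["exitcode"])
		}
	}
}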

TestJSONOutput/unpause/Command (1.77s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-933033 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-933033 --output=json --user=testUser: exit status 80 (1.769289154s)

-- stdout --
	{"specversion":"1.0","id":"64f91ccf-d713-4f55-9ce7-b2c44c078e0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-933033 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"2f08a18c-e5c7-4a8d-8232-45bc91b02b72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-10T07:02:59Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"ea7c6cb0-79fc-40c9-bf56-65ecdb8abe6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-933033 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.77s)

TestKubernetesUpgrade (806.5s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-943140 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-943140 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.144466451s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-943140
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-943140: (1.479624563s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-943140 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-943140 status --format={{.Host}}: exit status 7 (98.926566ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-943140 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-943140 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m37.771189917s)

-- stdout --
	* [kubernetes-upgrade-943140] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-943140" primary control-plane node in "kubernetes-upgrade-943140" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	I1210 07:21:24.970683  557955 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:21:24.970793  557955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:21:24.970799  557955 out.go:374] Setting ErrFile to fd 2...
	I1210 07:21:24.970804  557955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:21:24.971196  557955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 07:21:24.971619  557955 out.go:368] Setting JSON to false
	I1210 07:21:24.972544  557955 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":14637,"bootTime":1765336648,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 07:21:24.972663  557955 start.go:143] virtualization:  
	I1210 07:21:24.983685  557955 out.go:179] * [kubernetes-upgrade-943140] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:21:24.986722  557955 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:21:24.986781  557955 notify.go:221] Checking for updates...
	I1210 07:21:24.993556  557955 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:21:24.996454  557955 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 07:21:24.999254  557955 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 07:21:25.002210  557955 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:21:25.005996  557955 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:21:25.009836  557955 config.go:182] Loaded profile config "kubernetes-upgrade-943140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 07:21:25.010562  557955 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:21:25.063494  557955 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:21:25.063620  557955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:21:25.168462  557955 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:21:25.157600564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:21:25.168562  557955 docker.go:319] overlay module found
	I1210 07:21:25.171657  557955 out.go:179] * Using the docker driver based on existing profile
	I1210 07:21:25.174691  557955 start.go:309] selected driver: docker
	I1210 07:21:25.174731  557955 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-943140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-943140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:21:25.174821  557955 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:21:25.175521  557955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:21:25.271205  557955 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:21:25.259710821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:21:25.271530  557955 cni.go:84] Creating CNI manager for ""
	I1210 07:21:25.271583  557955 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:21:25.271623  557955 start.go:353] cluster config:
	{Name:kubernetes-upgrade-943140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-943140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:21:25.274868  557955 out.go:179] * Starting "kubernetes-upgrade-943140" primary control-plane node in "kubernetes-upgrade-943140" cluster
	I1210 07:21:25.277641  557955 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:21:25.280577  557955 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:21:25.283517  557955 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 07:21:25.283711  557955 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:21:25.306277  557955 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:21:25.306296  557955 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:21:25.339924  557955 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 07:21:25.496428  557955 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1210 07:21:25.496605  557955 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/kubernetes-upgrade-943140/config.json ...
	I1210 07:21:25.496881  557955 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:21:25.496920  557955 start.go:360] acquireMachinesLock for kubernetes-upgrade-943140: {Name:mke2d16e191e9380188869fe0bba20f2b8aaedfa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:21:25.496994  557955 start.go:364] duration metric: took 42.347µs to acquireMachinesLock for "kubernetes-upgrade-943140"
	I1210 07:21:25.497014  557955 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:21:25.497020  557955 fix.go:54] fixHost starting: 
	I1210 07:21:25.497395  557955 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-943140 --format={{.State.Status}}
	I1210 07:21:25.497821  557955 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:21:25.571009  557955 fix.go:112] recreateIfNeeded on kubernetes-upgrade-943140: state=Stopped err=<nil>
	W1210 07:21:25.571053  557955 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:21:25.574923  557955 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-943140" ...
	I1210 07:21:25.575052  557955 cli_runner.go:164] Run: docker start kubernetes-upgrade-943140
	I1210 07:21:25.794803  557955 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:21:25.997567  557955 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:21:26.016363  557955 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-943140 --format={{.State.Status}}
	I1210 07:21:26.076666  557955 kic.go:430] container "kubernetes-upgrade-943140" state is running.
	I1210 07:21:26.078642  557955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-943140
	I1210 07:21:26.123226  557955 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/kubernetes-upgrade-943140/config.json ...
	I1210 07:21:26.123469  557955 machine.go:94] provisionDockerMachine start ...
	I1210 07:21:26.123585  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:26.149318  557955 main.go:143] libmachine: Using SSH client type: native
	I1210 07:21:26.149659  557955 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1210 07:21:26.149674  557955 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:21:26.153800  557955 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:21:26.207702  557955 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:21:26.207807  557955 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:21:26.207822  557955 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 141.327µs
	I1210 07:21:26.207831  557955 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:21:26.207848  557955 cache.go:107] acquiring lock: {Name:mk8250dc655f821bf2674ed77a7683798ede4f4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:21:26.207894  557955 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:21:26.207909  557955 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 58.265µs
	I1210 07:21:26.207916  557955 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:21:26.207926  557955 cache.go:107] acquiring lock: {Name:mk1c0334e51772e4f6b3429cc05b91ec19d54b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:21:26.207964  557955 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:21:26.207972  557955 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 47.41µs
	I1210 07:21:26.207984  557955 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:21:26.208004  557955 cache.go:107] acquiring lock: {Name:mk41aaba55a3296a937cf380d44dc9920df69546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:21:26.208037  557955 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:21:26.208047  557955 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 43.833µs
	I1210 07:21:26.208053  557955 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:21:26.208062  557955 cache.go:107] acquiring lock: {Name:mk9268471d41d450748d4b0133d1a043378776f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:21:26.208093  557955 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 07:21:26.208102  557955 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 40.821µs
	I1210 07:21:26.208108  557955 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:21:26.208116  557955 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:21:26.208147  557955 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:21:26.208156  557955 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.698µs
	I1210 07:21:26.208163  557955 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:21:26.208171  557955 cache.go:107] acquiring lock: {Name:mkc2831727d74e5fadb1c00bd89f0236b385200e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:21:26.208201  557955 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 07:21:26.208210  557955 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 39.278µs
	I1210 07:21:26.208215  557955 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 07:21:26.208229  557955 cache.go:107] acquiring lock: {Name:mk7e052900e41419dc78d3311f27db508a8f64b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:21:26.208264  557955 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:21:26.208273  557955 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 44.587µs
	I1210 07:21:26.208279  557955 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:21:26.208288  557955 cache.go:87] Successfully saved all images to host disk.
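Note: every "save to tar file ... succeeded" above completes in microseconds because all eight images are already present in the host-side cache; only the lock acquisition is new work. The cache layout can be inspected directly using the paths from the log:

    ls /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/
    # expected entries include kube-apiserver_v1.35.0-rc.1, kube-proxy_v1.35.0-rc.1, etcd_3.6.6-0, pause_3.10.1, coredns/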
	I1210 07:21:29.318599  557955 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-943140
	
	I1210 07:21:29.318628  557955 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-943140"
	I1210 07:21:29.318716  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:29.342713  557955 main.go:143] libmachine: Using SSH client type: native
	I1210 07:21:29.343072  557955 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1210 07:21:29.343091  557955 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-943140 && echo "kubernetes-upgrade-943140" | sudo tee /etc/hostname
	I1210 07:21:29.528971  557955 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-943140
	
	I1210 07:21:29.529096  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:29.552695  557955 main.go:143] libmachine: Using SSH client type: native
	I1210 07:21:29.553011  557955 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1210 07:21:29.553027  557955 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-943140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-943140/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-943140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:21:29.718310  557955 main.go:143] libmachine: SSH cmd err, output: <nil>: 
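Note: hostname provisioning is a two-step sequence: set the kernel hostname and /etc/hostname, then patch /etc/hosts only if no matching entry exists (the grep guards above make it idempotent across restarts). A quick verification against the same container, sketched with the container name from the log:

    docker exec kubernetes-upgrade-943140 sh -c 'hostname; grep 127.0.1.1 /etc/hosts'
    # both lines should show kubernetes-upgrade-943140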
	I1210 07:21:29.718340  557955 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 07:21:29.718402  557955 ubuntu.go:190] setting up certificates
	I1210 07:21:29.718418  557955 provision.go:84] configureAuth start
	I1210 07:21:29.718505  557955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-943140
	I1210 07:21:29.740390  557955 provision.go:143] copyHostCerts
	I1210 07:21:29.740467  557955 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 07:21:29.740477  557955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 07:21:29.740556  557955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 07:21:29.740688  557955 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 07:21:29.740694  557955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 07:21:29.740724  557955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 07:21:29.740791  557955 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 07:21:29.740799  557955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 07:21:29.740825  557955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 07:21:29.740927  557955 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-943140 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-943140 localhost minikube]
	I1210 07:21:30.218638  557955 provision.go:177] copyRemoteCerts
	I1210 07:21:30.218774  557955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:21:30.218821  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:30.240593  557955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/kubernetes-upgrade-943140/id_rsa Username:docker}
	I1210 07:21:30.350311  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:21:30.371315  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1210 07:21:30.392226  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:21:30.413662  557955 provision.go:87] duration metric: took 695.215177ms to configureAuth
	I1210 07:21:30.413731  557955 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:21:30.413940  557955 config.go:182] Loaded profile config "kubernetes-upgrade-943140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 07:21:30.414119  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:30.442932  557955 main.go:143] libmachine: Using SSH client type: native
	I1210 07:21:30.443262  557955 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1210 07:21:30.443276  557955 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:21:30.821993  557955 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:21:30.822073  557955 machine.go:97] duration metric: took 4.698588898s to provisionDockerMachine
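Note: the SSH command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts cri-o; the echoed file contents confirm the write. A sketch of a follow-up check, assuming the crio unit sources that file as an EnvironmentFile:

    docker exec kubernetes-upgrade-943140 sh -c 'cat /etc/sysconfig/crio.minikube; pgrep -a crio | head -n1'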
	I1210 07:21:30.822099  557955 start.go:293] postStartSetup for "kubernetes-upgrade-943140" (driver="docker")
	I1210 07:21:30.822130  557955 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:21:30.822207  557955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:21:30.822292  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:30.840296  557955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/kubernetes-upgrade-943140/id_rsa Username:docker}
	I1210 07:21:30.949397  557955 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:21:30.952874  557955 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:21:30.952905  557955 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:21:30.952918  557955 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 07:21:30.952974  557955 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 07:21:30.953062  557955 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 07:21:30.953169  557955 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:21:30.961056  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 07:21:30.979270  557955 start.go:296] duration metric: took 157.136822ms for postStartSetup
	I1210 07:21:30.979361  557955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:21:30.979400  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:30.997616  557955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/kubernetes-upgrade-943140/id_rsa Username:docker}
	I1210 07:21:31.103115  557955 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:21:31.108500  557955 fix.go:56] duration metric: took 5.611458512s for fixHost
	I1210 07:21:31.108530  557955 start.go:83] releasing machines lock for "kubernetes-upgrade-943140", held for 5.611526828s
	I1210 07:21:31.108639  557955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-943140
	I1210 07:21:31.130177  557955 ssh_runner.go:195] Run: cat /version.json
	I1210 07:21:31.130240  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:31.130533  557955 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:21:31.130603  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:31.153342  557955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/kubernetes-upgrade-943140/id_rsa Username:docker}
	I1210 07:21:31.172267  557955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/kubernetes-upgrade-943140/id_rsa Username:docker}
	I1210 07:21:31.277735  557955 ssh_runner.go:195] Run: systemctl --version
	I1210 07:21:31.407570  557955 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:21:31.470744  557955 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:21:31.480159  557955 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:21:31.480277  557955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:21:31.492330  557955 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:21:31.492398  557955 start.go:496] detecting cgroup driver to use...
	I1210 07:21:31.492447  557955 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:21:31.492535  557955 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:21:31.517498  557955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:21:31.533938  557955 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:21:31.534054  557955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:21:31.560868  557955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:21:31.582616  557955 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:21:31.767716  557955 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:21:32.006235  557955 docker.go:234] disabling docker service ...
	I1210 07:21:32.006414  557955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:21:32.027934  557955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:21:32.043966  557955 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:21:32.248136  557955 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:21:32.473168  557955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:21:32.499065  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:21:32.522101  557955 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:21:32.712541  557955 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:21:32.712621  557955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:21:32.727941  557955 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:21:32.728024  557955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:21:32.743904  557955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:21:32.759795  557955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:21:32.769689  557955 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:21:32.783634  557955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:21:32.795898  557955 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:21:32.810419  557955 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
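Note: the sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image registry.k8s.io/pause:3.10.1, cgroup_manager "cgroupfs", conmon_cgroup "pod", and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls. A one-liner to review the result before the daemon-reload and crio restart that follow:

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf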
	I1210 07:21:32.826437  557955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:21:32.846743  557955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:21:32.860370  557955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:21:33.070448  557955 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:21:33.314016  557955 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:21:33.314118  557955 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:21:33.324925  557955 start.go:564] Will wait 60s for crictl version
	I1210 07:21:33.324999  557955 ssh_runner.go:195] Run: which crictl
	I1210 07:21:33.329735  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:21:33.373054  557955 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:21:33.373142  557955 ssh_runner.go:195] Run: crio --version
	I1210 07:21:33.438417  557955 ssh_runner.go:195] Run: crio --version
	I1210 07:21:33.502604  557955 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1210 07:21:33.505518  557955 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-943140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:21:33.529794  557955 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:21:33.537040  557955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
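Note: the host.minikube.internal entry is refreshed with a filter-then-append rewrite of /etc/hosts rather than sed, so repeated starts never accumulate duplicate lines. The same pattern with placeholder values (hypothetical IP and hostname, for illustration only):

    { grep -v $'\tmy.host.example$' /etc/hosts; printf '203.0.113.1\tmy.host.example\n'; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$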
	I1210 07:21:33.558473  557955 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-943140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-943140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:21:33.558652  557955 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:21:33.759853  557955 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:21:33.972387  557955 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:21:34.155828  557955 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1210 07:21:34.155894  557955 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:21:34.227756  557955 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 07:21:34.227778  557955 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:21:34.227826  557955 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:21:34.228027  557955 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:21:34.228116  557955 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:21:34.228215  557955 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:21:34.228294  557955 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:21:34.228388  557955 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:21:34.228482  557955 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:21:34.228564  557955 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:21:34.231779  557955 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:21:34.232194  557955 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:21:34.232434  557955 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:21:34.232566  557955 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:21:34.232709  557955 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:21:34.232827  557955 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:21:34.232939  557955 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:21:34.233674  557955 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:21:34.542358  557955 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:21:34.544961  557955 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1210 07:21:34.570038  557955 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:21:34.574230  557955 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:21:34.635606  557955 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1210 07:21:34.635699  557955 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:21:34.635798  557955 ssh_runner.go:195] Run: which crictl
	I1210 07:21:34.640536  557955 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:21:34.660673  557955 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:21:34.676853  557955 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:21:34.704088  557955 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1210 07:21:34.704133  557955 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 07:21:34.704183  557955 ssh_runner.go:195] Run: which crictl
	I1210 07:21:34.807236  557955 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 07:21:34.807281  557955 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:21:34.807331  557955 ssh_runner.go:195] Run: which crictl
	I1210 07:21:34.807400  557955 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1210 07:21:34.807419  557955 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:21:34.807442  557955 ssh_runner.go:195] Run: which crictl
	I1210 07:21:34.807506  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:21:34.870393  557955 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 07:21:34.870434  557955 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:21:34.870481  557955 ssh_runner.go:195] Run: which crictl
	I1210 07:21:34.870547  557955 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1210 07:21:34.870571  557955 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:21:34.870598  557955 ssh_runner.go:195] Run: which crictl
	I1210 07:21:34.915872  557955 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1210 07:21:34.915911  557955 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:21:34.915964  557955 ssh_runner.go:195] Run: which crictl
	I1210 07:21:34.916053  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 07:21:34.916115  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:21:34.916175  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:21:34.916237  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:21:34.916286  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:21:34.916342  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:21:35.125200  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:21:35.125422  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:21:35.125528  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 07:21:35.125624  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:21:35.125793  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:21:35.125744  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:21:35.125881  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:21:35.353786  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:21:35.353960  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:21:35.354073  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:21:35.354195  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 07:21:35.354297  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:21:35.354386  557955 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 07:21:35.354507  557955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 07:21:35.354620  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:21:35.494106  557955 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 07:21:35.494217  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
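Note: this is the check-then-copy pattern used for every image below: a remote stat exits 1 ("cannot statx ... No such file"), which triggers an scp of the cached tarball into /var/lib/minikube/images. A sketch of one round over the same SSH endpoint (port and key as in the sshutil lines; key flags omitted):

    ssh -p 33389 docker@127.0.0.1 'test -f /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1' \
      || scp -P 33389 ~/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 \
           docker@127.0.0.1:/var/lib/minikube/images/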
	I1210 07:21:35.574541  557955 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 07:21:35.574729  557955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:21:35.574842  557955 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1210 07:21:35.574973  557955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 07:21:35.575063  557955 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 07:21:35.575168  557955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:21:35.575278  557955 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 07:21:35.575365  557955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 07:21:35.575475  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:21:35.575564  557955 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 07:21:35.575648  557955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	W1210 07:21:35.575941  557955 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 07:21:35.576151  557955 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W1210 07:21:35.590736  557955 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1210 07:21:35.590837  557955 retry.go:31] will retry after 306.128161ms: ssh: rejected: connect failed (open failed)
	I1210 07:21:35.683384  557955 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 07:21:35.683476  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1210 07:21:35.683534  557955 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 07:21:35.683568  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1210 07:21:35.683620  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:35.683685  557955 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:21:35.683715  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 07:21:35.683779  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:35.683621  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:35.684197  557955 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 07:21:35.684217  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1210 07:21:35.684266  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:35.687177  557955 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 07:21:35.687265  557955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 07:21:35.687328  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:35.694010  557955 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 07:21:35.694495  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1210 07:21:35.694694  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:35.792409  557955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/kubernetes-upgrade-943140/id_rsa Username:docker}
	I1210 07:21:35.812297  557955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/kubernetes-upgrade-943140/id_rsa Username:docker}
	I1210 07:21:35.826035  557955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/kubernetes-upgrade-943140/id_rsa Username:docker}
	I1210 07:21:35.828873  557955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/kubernetes-upgrade-943140/id_rsa Username:docker}
	I1210 07:21:35.836856  557955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/kubernetes-upgrade-943140/id_rsa Username:docker}
	I1210 07:21:35.855826  557955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/kubernetes-upgrade-943140/id_rsa Username:docker}
	I1210 07:21:35.897840  557955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-943140
	I1210 07:21:35.939273  557955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/kubernetes-upgrade-943140/id_rsa Username:docker}
	I1210 07:21:36.119081  557955 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 07:21:36.119176  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1210 07:21:36.141564  557955 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 07:21:36.141691  557955 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 07:21:39.965137  557955 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (4.388938845s)
	I1210 07:21:39.965221  557955 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 07:21:39.965260  557955 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:21:39.965311  557955 ssh_runner.go:195] Run: which crictl
	I1210 07:21:39.966871  557955 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (3.825130038s)
	I1210 07:21:39.966895  557955 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 07:21:39.966913  557955 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:21:39.966966  557955 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1210 07:21:39.973800  557955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:21:40.250570  557955 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 07:21:40.250608  557955 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 07:21:40.250659  557955 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1210 07:21:40.250720  557955 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 07:21:40.250788  557955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:21:42.820125  557955 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.569310123s)
	I1210 07:21:42.820168  557955 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:21:42.820201  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 07:21:42.820380  557955 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (2.569703563s)
	I1210 07:21:42.820393  557955 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 07:21:42.820413  557955 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 07:21:42.820466  557955 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 07:21:44.735101  557955 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (1.914611266s)
	I1210 07:21:44.735137  557955 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 07:21:44.735163  557955 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:21:44.735220  557955 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:21:46.642711  557955 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.907462965s)
	I1210 07:21:46.642738  557955 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 07:21:46.642756  557955 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 07:21:46.642801  557955 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 07:21:48.421201  557955 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.778362006s)
	I1210 07:21:48.421229  557955 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 07:21:48.421249  557955 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 07:21:48.421311  557955 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 07:21:49.786100  557955 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.364766345s)
	I1210 07:21:49.786126  557955 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 07:21:49.786152  557955 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:21:49.786202  557955 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:21:50.397034  557955 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 07:21:50.397082  557955 cache_images.go:125] Successfully loaded all cached images
	I1210 07:21:50.397089  557955 cache_images.go:94] duration metric: took 16.169297561s to LoadCachedImages
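Note: the per-image fallback cost about 16s end to end, versus a single tarball extraction when a preload exists. Once loaded through "podman load" the images are visible to cri-o; a quick in-node check:

    sudo crictl images | grep -E 'v1.35.0-rc.1|3.6.6-0|v1.13.1|3.10.1'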
	I1210 07:21:50.397101  557955 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1210 07:21:50.397280  557955 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-943140 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-943140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
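Note: the [Service] override above first clears ExecStart, then relaunches kubelet from the versioned binary directory with node-specific flags (hostname override, node IP, bootstrap kubeconfig). The effective unit, including this drop-in, can be inspected with:

    systemctl cat kubelet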
	I1210 07:21:50.397371  557955 ssh_runner.go:195] Run: crio config
	I1210 07:21:50.458168  557955 cni.go:84] Creating CNI manager for ""
	I1210 07:21:50.458188  557955 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:21:50.458211  557955 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:21:50.458234  557955 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-943140 NodeName:kubernetes-upgrade-943140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:21:50.458352  557955 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-943140"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
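	Once the pinned binaries land (next step), a rendered config like the one above can be sanity-checked offline before any phase consumes it; a sketch, assuming the validate subcommand shipped in kubeadm v1.26 and later:
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml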
	I1210 07:21:50.458421  557955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:21:50.467669  557955 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 07:21:50.467737  557955 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:21:50.481911  557955 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1210 07:21:50.482009  557955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 07:21:50.482083  557955 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	I1210 07:21:50.482109  557955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:21:50.482179  557955 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:21:50.482223  557955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 07:21:50.490566  557955 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 07:21:50.490598  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1210 07:21:50.522348  557955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 07:21:50.522415  557955 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 07:21:50.522429  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1210 07:21:50.557467  557955 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 07:21:50.557555  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
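	The three scp lines above copy kubectl (~55 MB), kubeadm (~68 MB) and kubelet (~54 MB) out of the Jenkins-side cache; the "Not caching binary" lines name the checksum-pinned fallback URLs. A manual download can be verified against the same published digests, e.g. for kubelet (standard dl.k8s.io layout; this command is a sketch, not taken from the log):
	curl -LO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet
	curl -LO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check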
	I1210 07:21:51.670835  557955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:21:51.679639  557955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1210 07:21:51.693360  557955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:21:51.707433  557955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1210 07:21:51.721210  557955 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:21:51.725660  557955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
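	The grep on the previous line probes for an existing control-plane mapping; the one-liner above then rewrites /etc/hosts. Spelled out with comments (same commands, just annotated):
	{
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts    # keep everything except a stale mapping
	  echo "192.168.76.2	control-plane.minikube.internal"        # append the fresh entry
	} > /tmp/h.$$                                                 # stage in a PID-keyed temp file
	sudo cp /tmp/h.$$ /etc/hosts                                  # then copy it over the original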
	I1210 07:21:51.737928  557955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:21:51.891146  557955 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:21:51.910332  557955 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/kubernetes-upgrade-943140 for IP: 192.168.76.2
	I1210 07:21:51.910360  557955 certs.go:195] generating shared ca certs ...
	I1210 07:21:51.910378  557955 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:21:51.910518  557955 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 07:21:51.910573  557955 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 07:21:51.910586  557955 certs.go:257] generating profile certs ...
	I1210 07:21:51.910672  557955 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/kubernetes-upgrade-943140/client.key
	I1210 07:21:51.910746  557955 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/kubernetes-upgrade-943140/apiserver.key.6955e2f2
	I1210 07:21:51.910791  557955 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/kubernetes-upgrade-943140/proxy-client.key
	I1210 07:21:51.910909  557955 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 07:21:51.910946  557955 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 07:21:51.910960  557955 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 07:21:51.910994  557955 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 07:21:51.911023  557955 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:21:51.911051  557955 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 07:21:51.911098  557955 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 07:21:51.911691  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:21:51.966652  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:21:52.015843  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:21:52.047521  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:21:52.075414  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/kubernetes-upgrade-943140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 07:21:52.102604  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/kubernetes-upgrade-943140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:21:52.122551  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/kubernetes-upgrade-943140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:21:52.144680  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/kubernetes-upgrade-943140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:21:52.171807  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:21:52.192941  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 07:21:52.213117  557955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 07:21:52.237210  557955 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:21:52.253886  557955 ssh_runner.go:195] Run: openssl version
	I1210 07:21:52.261409  557955 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:21:52.269969  557955 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:21:52.278558  557955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:21:52.282965  557955 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:21:52.283034  557955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:21:52.327484  557955 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:21:52.335799  557955 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 07:21:52.347576  557955 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 07:21:52.357423  557955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 07:21:52.362833  557955 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 07:21:52.362898  557955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 07:21:52.410838  557955 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:21:52.418889  557955 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 07:21:52.426807  557955 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 07:21:52.436176  557955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 07:21:52.440426  557955 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 07:21:52.440491  557955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 07:21:52.482970  557955 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
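	The three near-identical sequences above (test -s, ln -fs, ls -la, openssl x509 -hash, test -L) install each PEM into OpenSSL's hashed CApath: the link names b5213941.0, 51391683.0 and 3ec20f2e.0 are the certificates' subject hashes with a ".0" suffix, which is how OpenSSL's CApath lookup locates a CA. A condensed sketch of one round:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
	sudo test -L "/etc/ssl/certs/$h.0"   # confirm the hashed link is in place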
	I1210 07:21:52.492205  557955 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:21:52.497805  557955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:21:52.543568  557955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:21:52.589127  557955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:21:52.632449  557955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:21:52.703382  557955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:21:52.766330  557955 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
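	Each openssl invocation above asks a narrow question: -checkend 86400 exits 0 only if the certificate is still valid 86400 seconds (24 h) from now, so a non-zero status flags a cert that is expired or about to expire. For example:
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires within 24h (or is already invalid)"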
	I1210 07:21:52.822898  557955 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-943140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-943140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:21:52.823008  557955 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:21:52.823072  557955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:21:52.861123  557955 cri.go:89] found id: ""
	I1210 07:21:52.861223  557955 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:21:52.870584  557955 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:21:52.870607  557955 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:21:52.870677  557955 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:21:52.879482  557955 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:21:52.879874  557955 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-943140" does not appear in /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 07:21:52.879977  557955 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-362392/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-943140" cluster setting kubeconfig missing "kubernetes-upgrade-943140" context setting]
	I1210 07:21:52.880308  557955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:21:52.880844  557955 kapi.go:59] client config for kubernetes-upgrade-943140: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/kubernetes-upgrade-943140/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/kubernetes-upgrade-943140/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:21:52.881594  557955 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 07:21:52.881616  557955 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 07:21:52.881622  557955 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 07:21:52.881626  557955 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 07:21:52.881630  557955 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 07:21:52.881940  557955 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:21:52.895396  557955 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 07:20:59.732342187 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 07:21:51.716937757 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-943140"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-rc.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
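	The drift boils down to the kubeadm config API moving from v1beta3 to v1beta4 (extraArgs changing from a string map to a list of name/value pairs), plus the upgrade itself: kubernetesVersion jumps from v1.28.0 to v1.35.0-rc.1 and the etcd proxy-refresh-interval extra arg is dropped. minikube simply regenerates the file, but kubeadm can migrate the schema on its own (a sketch; it converts the API version, not the kubernetesVersion bump, and the output file name here is illustrative):
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-migrated.yaml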
	I1210 07:21:52.895419  557955 kubeadm.go:1161] stopping kube-system containers ...
	I1210 07:21:52.895431  557955 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 07:21:52.895486  557955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:21:52.923410  557955 cri.go:89] found id: ""
	I1210 07:21:52.923477  557955 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 07:21:52.938124  557955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:21:52.947519  557955 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 10 07:21 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 10 07:21 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 10 07:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 10 07:21 /etc/kubernetes/scheduler.conf
	
	I1210 07:21:52.947605  557955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:21:52.957078  557955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:21:52.966696  557955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:21:52.975785  557955 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:21:52.975884  557955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:21:52.987719  557955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:21:52.996744  557955 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:21:52.996863  557955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:21:53.009588  557955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:21:53.021252  557955 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:21:53.086097  557955 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:21:54.210049  557955 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.123917318s)
	I1210 07:21:54.210112  557955 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:21:54.479258  557955 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:21:54.571347  557955 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
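	This is the restartPrimaryControlPlane path announced at 07:21:52: rather than a full "kubeadm init", minikube replays only the phases it needs against the regenerated config. Condensed, the five Run lines above amount to (same commands; the loop form is a sketch):
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done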
	I1210 07:21:54.642398  557955 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:21:54.642489  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:21:55.143149  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:21:55.643535  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:21:56.142627  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:21:56.643374  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:21:57.143340  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:21:57.642813  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:21:58.143522  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:21:58.642599  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:21:59.143493  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:21:59.642602  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:00.143097  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:00.643061  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:01.142678  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:01.642935  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:02.142803  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:02.642744  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:03.143003  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:03.643030  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:04.143056  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:04.643436  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:05.143154  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:05.642663  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:06.142649  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:06.642625  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:07.143174  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:07.642632  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:08.143316  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:08.643174  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:09.143415  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:09.642553  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:10.143187  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:10.642659  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:11.142840  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:11.642824  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:12.142621  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:12.642596  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:13.142771  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:13.643272  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:14.142708  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:14.642665  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:15.142947  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:15.642875  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:16.143476  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:16.643007  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:17.142603  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:17.642658  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:18.143315  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:18.643545  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:19.142722  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:19.643030  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:20.143093  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:20.642882  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:21.143472  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:21.642620  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:22.142612  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:22.642647  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:23.143021  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:23.642638  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:24.142856  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:24.643175  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:25.142831  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:25.643591  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:26.142628  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:26.643292  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:27.142839  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:27.642634  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:28.142750  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:28.642686  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:29.142785  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:29.642649  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:30.143169  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:30.642884  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:31.143177  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:31.642906  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:32.142536  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:32.642762  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:33.143158  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:33.643385  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:34.142645  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:34.642942  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:35.143185  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:35.642615  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:36.142648  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:36.642626  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:37.143607  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:37.642737  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:38.142884  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:38.643191  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:39.142679  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:39.643026  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:40.142773  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:40.643514  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:41.143155  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:41.642694  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:42.142836  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:42.643391  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:43.143182  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:43.643209  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:44.143127  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:44.642623  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:45.153432  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:45.642555  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:46.143357  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:46.643009  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:47.142613  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:47.642742  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:48.142758  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:48.642593  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:49.142645  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:49.642866  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:50.143254  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:50.642665  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:51.143323  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:51.643512  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:52.142653  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:52.642670  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:53.143084  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:53.642922  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:54.143508  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
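	The ~120 pgrep probes above are the apiserver wait loop: one probe roughly every 500 ms from 07:21:54 to 07:22:54 (timestamps alternate between ~.14 and ~.64 within each second). After a minute without a hit, minikube pauses to gather diagnostics (the crictl/journalctl sweep below) and then resumes probing. A host-side equivalent of the loop, quoting the same probe command:
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done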
	I1210 07:22:54.642800  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:22:54.642903  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:22:54.673108  557955 cri.go:89] found id: ""
	I1210 07:22:54.673138  557955 logs.go:282] 0 containers: []
	W1210 07:22:54.673147  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:22:54.673154  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:22:54.673225  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:22:54.699407  557955 cri.go:89] found id: ""
	I1210 07:22:54.699435  557955 logs.go:282] 0 containers: []
	W1210 07:22:54.699446  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:22:54.699453  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:22:54.699513  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:22:54.728364  557955 cri.go:89] found id: ""
	I1210 07:22:54.728393  557955 logs.go:282] 0 containers: []
	W1210 07:22:54.728402  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:22:54.728409  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:22:54.728473  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:22:54.757536  557955 cri.go:89] found id: ""
	I1210 07:22:54.757566  557955 logs.go:282] 0 containers: []
	W1210 07:22:54.757576  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:22:54.757584  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:22:54.757647  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:22:54.788380  557955 cri.go:89] found id: ""
	I1210 07:22:54.788408  557955 logs.go:282] 0 containers: []
	W1210 07:22:54.788418  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:22:54.788425  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:22:54.788494  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:22:54.814454  557955 cri.go:89] found id: ""
	I1210 07:22:54.814479  557955 logs.go:282] 0 containers: []
	W1210 07:22:54.814488  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:22:54.814494  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:22:54.814554  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:22:54.846493  557955 cri.go:89] found id: ""
	I1210 07:22:54.846548  557955 logs.go:282] 0 containers: []
	W1210 07:22:54.846557  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:22:54.846564  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:22:54.846631  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:22:54.877067  557955 cri.go:89] found id: ""
	I1210 07:22:54.877096  557955 logs.go:282] 0 containers: []
	W1210 07:22:54.877106  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:22:54.877116  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:22:54.877128  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:22:54.959013  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:22:54.959039  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:22:54.959053  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:22:55.001201  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:22:55.001237  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:22:55.043144  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:22:55.043177  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:22:55.118712  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:22:55.118761  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:22:57.637368  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:22:57.650109  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:22:57.650175  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:22:57.680516  557955 cri.go:89] found id: ""
	I1210 07:22:57.680537  557955 logs.go:282] 0 containers: []
	W1210 07:22:57.680546  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:22:57.680553  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:22:57.680611  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:22:57.707273  557955 cri.go:89] found id: ""
	I1210 07:22:57.707298  557955 logs.go:282] 0 containers: []
	W1210 07:22:57.707308  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:22:57.707314  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:22:57.707371  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:22:57.733646  557955 cri.go:89] found id: ""
	I1210 07:22:57.733672  557955 logs.go:282] 0 containers: []
	W1210 07:22:57.733681  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:22:57.733687  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:22:57.733746  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:22:57.760182  557955 cri.go:89] found id: ""
	I1210 07:22:57.760203  557955 logs.go:282] 0 containers: []
	W1210 07:22:57.760213  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:22:57.760219  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:22:57.760277  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:22:57.786714  557955 cri.go:89] found id: ""
	I1210 07:22:57.786740  557955 logs.go:282] 0 containers: []
	W1210 07:22:57.786749  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:22:57.786756  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:22:57.786813  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:22:57.813996  557955 cri.go:89] found id: ""
	I1210 07:22:57.814020  557955 logs.go:282] 0 containers: []
	W1210 07:22:57.814029  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:22:57.814035  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:22:57.814094  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:22:57.840817  557955 cri.go:89] found id: ""
	I1210 07:22:57.840844  557955 logs.go:282] 0 containers: []
	W1210 07:22:57.840853  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:22:57.840860  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:22:57.840921  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:22:57.867690  557955 cri.go:89] found id: ""
	I1210 07:22:57.867714  557955 logs.go:282] 0 containers: []
	W1210 07:22:57.867723  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:22:57.867732  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:22:57.867762  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:22:57.939657  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:22:57.939679  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:22:57.939691  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:22:57.982548  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:22:57.982584  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:22:58.016586  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:22:58.016622  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:22:58.083988  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:22:58.084027  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:00.603133  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:00.637806  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:00.637880  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:00.695354  557955 cri.go:89] found id: ""
	I1210 07:23:00.695376  557955 logs.go:282] 0 containers: []
	W1210 07:23:00.695385  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:00.695391  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:00.695451  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:00.733389  557955 cri.go:89] found id: ""
	I1210 07:23:00.733416  557955 logs.go:282] 0 containers: []
	W1210 07:23:00.733428  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:00.733435  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:00.733493  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:00.785992  557955 cri.go:89] found id: ""
	I1210 07:23:00.786017  557955 logs.go:282] 0 containers: []
	W1210 07:23:00.786026  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:00.786032  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:00.786092  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:00.819586  557955 cri.go:89] found id: ""
	I1210 07:23:00.819612  557955 logs.go:282] 0 containers: []
	W1210 07:23:00.819621  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:00.819627  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:00.819684  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:00.855622  557955 cri.go:89] found id: ""
	I1210 07:23:00.855648  557955 logs.go:282] 0 containers: []
	W1210 07:23:00.855657  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:00.855663  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:00.855722  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:00.888914  557955 cri.go:89] found id: ""
	I1210 07:23:00.888940  557955 logs.go:282] 0 containers: []
	W1210 07:23:00.888949  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:00.888956  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:00.889014  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:00.930798  557955 cri.go:89] found id: ""
	I1210 07:23:00.930823  557955 logs.go:282] 0 containers: []
	W1210 07:23:00.930831  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:00.930837  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:00.930894  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:00.957721  557955 cri.go:89] found id: ""
	I1210 07:23:00.957748  557955 logs.go:282] 0 containers: []
	W1210 07:23:00.957757  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:00.957768  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:00.957778  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:00.999931  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:00.999966  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:01.032993  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:01.033025  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:01.103203  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:01.103244  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:01.121136  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:01.121330  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:01.190799  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:03.690997  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:03.706517  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:03.706585  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:03.744747  557955 cri.go:89] found id: ""
	I1210 07:23:03.744778  557955 logs.go:282] 0 containers: []
	W1210 07:23:03.744787  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:03.744794  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:03.744856  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:03.786838  557955 cri.go:89] found id: ""
	I1210 07:23:03.786860  557955 logs.go:282] 0 containers: []
	W1210 07:23:03.786869  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:03.786875  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:03.786932  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:03.819767  557955 cri.go:89] found id: ""
	I1210 07:23:03.819789  557955 logs.go:282] 0 containers: []
	W1210 07:23:03.819799  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:03.819806  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:03.819867  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:03.856919  557955 cri.go:89] found id: ""
	I1210 07:23:03.857003  557955 logs.go:282] 0 containers: []
	W1210 07:23:03.857029  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:03.857050  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:03.857178  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:03.906776  557955 cri.go:89] found id: ""
	I1210 07:23:03.906798  557955 logs.go:282] 0 containers: []
	W1210 07:23:03.906806  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:03.906812  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:03.906874  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:03.962106  557955 cri.go:89] found id: ""
	I1210 07:23:03.962128  557955 logs.go:282] 0 containers: []
	W1210 07:23:03.962136  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:03.962143  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:03.962252  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:04.010182  557955 cri.go:89] found id: ""
	I1210 07:23:04.010207  557955 logs.go:282] 0 containers: []
	W1210 07:23:04.010217  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:04.010224  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:04.010291  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:04.052976  557955 cri.go:89] found id: ""
	I1210 07:23:04.052999  557955 logs.go:282] 0 containers: []
	W1210 07:23:04.053007  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:04.053017  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:04.053028  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:04.147110  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:04.147199  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:04.168543  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:04.168621  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:04.284723  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:04.284740  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:04.284752  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:04.363351  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:04.363435  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
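The cycle above is minikube's recurring control-plane diagnostic sweep: probe for a kube-apiserver process, list CRI containers for each expected component by name, and, when every listing comes back empty, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The commands are taken verbatim from the log lines; the loop below is only a hedged sketch for re-running the same sweep by hand on the node (the component list mirrors the log, not any documented minikube interface):

    # Check each expected control-plane container by name, as the sweep does.
    components="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner"
    for c in $components; do
      ids=$(sudo crictl ps -a --quiet --name="$c")   # empty output == no container found
      [ -z "$ids" ] && echo "no container matching \"$c\"" || echo "$c -> $ids"
    done
    # Same log sources the sweep gathers on every pass.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a || sudo docker ps -a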
	I1210 07:23:06.901311  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:06.913123  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:06.913271  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:06.938407  557955 cri.go:89] found id: ""
	I1210 07:23:06.938436  557955 logs.go:282] 0 containers: []
	W1210 07:23:06.938446  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:06.938453  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:06.938510  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:06.965640  557955 cri.go:89] found id: ""
	I1210 07:23:06.965668  557955 logs.go:282] 0 containers: []
	W1210 07:23:06.965677  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:06.965683  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:06.965741  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:06.992770  557955 cri.go:89] found id: ""
	I1210 07:23:06.992822  557955 logs.go:282] 0 containers: []
	W1210 07:23:06.992831  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:06.992837  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:06.992901  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:07.021609  557955 cri.go:89] found id: ""
	I1210 07:23:07.021636  557955 logs.go:282] 0 containers: []
	W1210 07:23:07.021644  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:07.021651  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:07.021711  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:07.048300  557955 cri.go:89] found id: ""
	I1210 07:23:07.048367  557955 logs.go:282] 0 containers: []
	W1210 07:23:07.048392  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:07.048412  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:07.048499  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:07.079641  557955 cri.go:89] found id: ""
	I1210 07:23:07.079673  557955 logs.go:282] 0 containers: []
	W1210 07:23:07.079683  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:07.079692  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:07.079759  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:07.108287  557955 cri.go:89] found id: ""
	I1210 07:23:07.108322  557955 logs.go:282] 0 containers: []
	W1210 07:23:07.108331  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:07.108337  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:07.108403  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:07.135067  557955 cri.go:89] found id: ""
	I1210 07:23:07.135139  557955 logs.go:282] 0 containers: []
	W1210 07:23:07.135154  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:07.135166  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:07.135178  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:07.176379  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:07.176415  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:07.206003  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:07.206040  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:07.276225  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:07.276266  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:07.292415  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:07.292443  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:07.355989  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
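Every "describe nodes" attempt above fails identically: kubectl cannot reach the apiserver on localhost:8443, which is consistent with the empty crictl listings — no kube-apiserver container ever starts, so the TCP connection is refused outright rather than answered by an unhealthy server. A hedged sketch for telling those two cases apart on the node (the /healthz probe assumes the secure port 8443 shown in the log):

    # A refused connection means nothing is listening on 8443 at all;
    # any HTTP(S) response, even 401 or 500, would mean the apiserver
    # process is up but unhealthy.
    curl -sk https://localhost:8443/healthz || echo "refused: apiserver not listening"
    sudo ss -ltnp | grep 8443 || echo "no listener on port 8443"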
	I1210 07:23:09.857351  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:09.868685  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:09.868755  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:09.893909  557955 cri.go:89] found id: ""
	I1210 07:23:09.893936  557955 logs.go:282] 0 containers: []
	W1210 07:23:09.893945  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:09.893952  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:09.894054  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:09.922468  557955 cri.go:89] found id: ""
	I1210 07:23:09.922493  557955 logs.go:282] 0 containers: []
	W1210 07:23:09.922502  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:09.922509  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:09.922588  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:09.948934  557955 cri.go:89] found id: ""
	I1210 07:23:09.948971  557955 logs.go:282] 0 containers: []
	W1210 07:23:09.948982  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:09.948989  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:09.949049  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:09.980297  557955 cri.go:89] found id: ""
	I1210 07:23:09.980322  557955 logs.go:282] 0 containers: []
	W1210 07:23:09.980330  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:09.980338  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:09.980397  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:10.023327  557955 cri.go:89] found id: ""
	I1210 07:23:10.023356  557955 logs.go:282] 0 containers: []
	W1210 07:23:10.023366  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:10.023374  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:10.023438  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:10.060277  557955 cri.go:89] found id: ""
	I1210 07:23:10.060312  557955 logs.go:282] 0 containers: []
	W1210 07:23:10.060323  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:10.060332  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:10.060405  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:10.088365  557955 cri.go:89] found id: ""
	I1210 07:23:10.088424  557955 logs.go:282] 0 containers: []
	W1210 07:23:10.088435  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:10.088443  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:10.088557  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:10.116400  557955 cri.go:89] found id: ""
	I1210 07:23:10.116433  557955 logs.go:282] 0 containers: []
	W1210 07:23:10.116446  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:10.116457  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:10.116472  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:10.185144  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:10.185179  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:10.202820  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:10.202854  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:10.276822  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:10.276847  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:10.276860  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:10.319353  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:10.319392  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:12.854065  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:12.866075  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:12.866141  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:12.896258  557955 cri.go:89] found id: ""
	I1210 07:23:12.896285  557955 logs.go:282] 0 containers: []
	W1210 07:23:12.896293  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:12.896299  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:12.896356  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:12.922378  557955 cri.go:89] found id: ""
	I1210 07:23:12.922401  557955 logs.go:282] 0 containers: []
	W1210 07:23:12.922410  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:12.922416  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:12.922473  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:12.949415  557955 cri.go:89] found id: ""
	I1210 07:23:12.949438  557955 logs.go:282] 0 containers: []
	W1210 07:23:12.949447  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:12.949453  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:12.949510  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:12.975862  557955 cri.go:89] found id: ""
	I1210 07:23:12.975927  557955 logs.go:282] 0 containers: []
	W1210 07:23:12.975952  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:12.975972  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:12.976050  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:13.005119  557955 cri.go:89] found id: ""
	I1210 07:23:13.005251  557955 logs.go:282] 0 containers: []
	W1210 07:23:13.005281  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:13.005301  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:13.005425  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:13.040663  557955 cri.go:89] found id: ""
	I1210 07:23:13.040734  557955 logs.go:282] 0 containers: []
	W1210 07:23:13.040758  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:13.040778  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:13.040896  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:13.071179  557955 cri.go:89] found id: ""
	I1210 07:23:13.071204  557955 logs.go:282] 0 containers: []
	W1210 07:23:13.071213  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:13.071219  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:13.071297  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:13.100488  557955 cri.go:89] found id: ""
	I1210 07:23:13.100514  557955 logs.go:282] 0 containers: []
	W1210 07:23:13.100523  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:13.100549  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:13.100564  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:13.168371  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:13.168409  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:13.185656  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:13.185687  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:13.248506  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:13.248571  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:13.248594  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:13.289938  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:13.289974  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:15.825384  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:15.842333  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:15.842403  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:15.869042  557955 cri.go:89] found id: ""
	I1210 07:23:15.869072  557955 logs.go:282] 0 containers: []
	W1210 07:23:15.869082  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:15.869090  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:15.869150  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:15.900625  557955 cri.go:89] found id: ""
	I1210 07:23:15.900652  557955 logs.go:282] 0 containers: []
	W1210 07:23:15.900662  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:15.900668  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:15.900729  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:15.928382  557955 cri.go:89] found id: ""
	I1210 07:23:15.928409  557955 logs.go:282] 0 containers: []
	W1210 07:23:15.928418  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:15.928424  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:15.928485  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:15.959085  557955 cri.go:89] found id: ""
	I1210 07:23:15.959111  557955 logs.go:282] 0 containers: []
	W1210 07:23:15.959121  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:15.959128  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:15.959187  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:15.987934  557955 cri.go:89] found id: ""
	I1210 07:23:15.987961  557955 logs.go:282] 0 containers: []
	W1210 07:23:15.987970  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:15.987976  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:15.988083  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:16.019446  557955 cri.go:89] found id: ""
	I1210 07:23:16.019518  557955 logs.go:282] 0 containers: []
	W1210 07:23:16.019544  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:16.019557  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:16.019650  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:16.047368  557955 cri.go:89] found id: ""
	I1210 07:23:16.047395  557955 logs.go:282] 0 containers: []
	W1210 07:23:16.047404  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:16.047410  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:16.047471  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:16.074102  557955 cri.go:89] found id: ""
	I1210 07:23:16.074126  557955 logs.go:282] 0 containers: []
	W1210 07:23:16.074140  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:16.074149  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:16.074160  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:16.142311  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:16.142348  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:16.160271  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:16.160304  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:16.229109  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:16.229128  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:16.229141  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:16.270146  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:16.270185  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
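The timestamps show the whole sweep repeating on a roughly three-second cadence (07:23:07, :10, :13, :16, ...), gated by the pgrep probe at the top of each cycle. A minimal sketch of an equivalent wait loop, with the pgrep pattern copied verbatim from the log and a purely illustrative five-minute cap:

    # Poll for a kube-apiserver process every 3 s, as the cycles above do.
    for i in $(seq 1 100); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && { echo "apiserver up"; break; }
      sleep 3
    done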
	I1210 07:23:18.803388  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:18.814865  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:18.814935  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:18.847259  557955 cri.go:89] found id: ""
	I1210 07:23:18.847288  557955 logs.go:282] 0 containers: []
	W1210 07:23:18.847298  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:18.847305  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:18.847368  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:18.898237  557955 cri.go:89] found id: ""
	I1210 07:23:18.898259  557955 logs.go:282] 0 containers: []
	W1210 07:23:18.898267  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:18.898273  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:18.898329  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:18.938340  557955 cri.go:89] found id: ""
	I1210 07:23:18.938361  557955 logs.go:282] 0 containers: []
	W1210 07:23:18.938370  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:18.938376  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:18.938434  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:18.975627  557955 cri.go:89] found id: ""
	I1210 07:23:18.975649  557955 logs.go:282] 0 containers: []
	W1210 07:23:18.975658  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:18.975664  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:18.975720  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:19.006130  557955 cri.go:89] found id: ""
	I1210 07:23:19.006156  557955 logs.go:282] 0 containers: []
	W1210 07:23:19.006166  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:19.006172  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:19.006248  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:19.041846  557955 cri.go:89] found id: ""
	I1210 07:23:19.041927  557955 logs.go:282] 0 containers: []
	W1210 07:23:19.041952  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:19.041971  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:19.042063  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:19.072712  557955 cri.go:89] found id: ""
	I1210 07:23:19.072735  557955 logs.go:282] 0 containers: []
	W1210 07:23:19.072743  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:19.072750  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:19.072824  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:19.101256  557955 cri.go:89] found id: ""
	I1210 07:23:19.101280  557955 logs.go:282] 0 containers: []
	W1210 07:23:19.101288  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:19.101297  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:19.101310  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:19.120121  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:19.120199  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:19.220449  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:19.220512  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:19.220540  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:19.266963  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:19.267048  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:19.302177  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:19.302247  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:21.884898  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:21.896643  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:21.896713  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:21.925584  557955 cri.go:89] found id: ""
	I1210 07:23:21.925609  557955 logs.go:282] 0 containers: []
	W1210 07:23:21.925619  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:21.925625  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:21.925685  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:21.956869  557955 cri.go:89] found id: ""
	I1210 07:23:21.956895  557955 logs.go:282] 0 containers: []
	W1210 07:23:21.956905  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:21.956911  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:21.956973  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:21.986747  557955 cri.go:89] found id: ""
	I1210 07:23:21.986771  557955 logs.go:282] 0 containers: []
	W1210 07:23:21.986780  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:21.986786  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:21.986867  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:22.039253  557955 cri.go:89] found id: ""
	I1210 07:23:22.039283  557955 logs.go:282] 0 containers: []
	W1210 07:23:22.039293  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:22.039299  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:22.039399  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:22.097793  557955 cri.go:89] found id: ""
	I1210 07:23:22.097828  557955 logs.go:282] 0 containers: []
	W1210 07:23:22.097841  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:22.097850  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:22.097939  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:22.133631  557955 cri.go:89] found id: ""
	I1210 07:23:22.133655  557955 logs.go:282] 0 containers: []
	W1210 07:23:22.133665  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:22.133672  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:22.133737  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:22.176927  557955 cri.go:89] found id: ""
	I1210 07:23:22.176954  557955 logs.go:282] 0 containers: []
	W1210 07:23:22.176964  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:22.176971  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:22.177039  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:22.208647  557955 cri.go:89] found id: ""
	I1210 07:23:22.208720  557955 logs.go:282] 0 containers: []
	W1210 07:23:22.208760  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:22.208783  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:22.208834  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:22.288593  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:22.288696  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:22.305818  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:22.305847  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:22.446133  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:22.446154  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:22.446167  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:22.502225  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:22.502334  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:25.042560  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:25.055279  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:25.055368  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:25.093128  557955 cri.go:89] found id: ""
	I1210 07:23:25.093157  557955 logs.go:282] 0 containers: []
	W1210 07:23:25.093166  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:25.093174  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:25.093369  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:25.125666  557955 cri.go:89] found id: ""
	I1210 07:23:25.125693  557955 logs.go:282] 0 containers: []
	W1210 07:23:25.125702  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:25.125708  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:25.125768  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:25.152995  557955 cri.go:89] found id: ""
	I1210 07:23:25.153025  557955 logs.go:282] 0 containers: []
	W1210 07:23:25.153035  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:25.153042  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:25.153112  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:25.182449  557955 cri.go:89] found id: ""
	I1210 07:23:25.182474  557955 logs.go:282] 0 containers: []
	W1210 07:23:25.182483  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:25.182490  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:25.182552  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:25.212843  557955 cri.go:89] found id: ""
	I1210 07:23:25.212910  557955 logs.go:282] 0 containers: []
	W1210 07:23:25.212934  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:25.212962  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:25.213049  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:25.239830  557955 cri.go:89] found id: ""
	I1210 07:23:25.239855  557955 logs.go:282] 0 containers: []
	W1210 07:23:25.239864  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:25.239871  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:25.239930  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:25.267102  557955 cri.go:89] found id: ""
	I1210 07:23:25.267136  557955 logs.go:282] 0 containers: []
	W1210 07:23:25.267146  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:25.267152  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:25.267212  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:25.294145  557955 cri.go:89] found id: ""
	I1210 07:23:25.294219  557955 logs.go:282] 0 containers: []
	W1210 07:23:25.294231  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:25.294241  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:25.294252  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:25.371957  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:25.372001  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:25.392461  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:25.392505  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:25.472698  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:25.472721  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:25.472736  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:25.516267  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:25.516304  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:28.048725  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:28.060827  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:28.060900  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:28.095378  557955 cri.go:89] found id: ""
	I1210 07:23:28.095407  557955 logs.go:282] 0 containers: []
	W1210 07:23:28.095417  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:28.095423  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:28.095484  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:28.123565  557955 cri.go:89] found id: ""
	I1210 07:23:28.123594  557955 logs.go:282] 0 containers: []
	W1210 07:23:28.123603  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:28.123609  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:28.123668  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:28.151535  557955 cri.go:89] found id: ""
	I1210 07:23:28.151561  557955 logs.go:282] 0 containers: []
	W1210 07:23:28.151570  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:28.151576  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:28.151639  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:28.179743  557955 cri.go:89] found id: ""
	I1210 07:23:28.179772  557955 logs.go:282] 0 containers: []
	W1210 07:23:28.179782  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:28.179789  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:28.179850  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:28.206650  557955 cri.go:89] found id: ""
	I1210 07:23:28.206674  557955 logs.go:282] 0 containers: []
	W1210 07:23:28.206683  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:28.206690  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:28.206755  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:28.232963  557955 cri.go:89] found id: ""
	I1210 07:23:28.232990  557955 logs.go:282] 0 containers: []
	W1210 07:23:28.232999  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:28.233006  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:28.233064  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:28.258866  557955 cri.go:89] found id: ""
	I1210 07:23:28.258890  557955 logs.go:282] 0 containers: []
	W1210 07:23:28.258899  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:28.258906  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:28.258969  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:28.289801  557955 cri.go:89] found id: ""
	I1210 07:23:28.289827  557955 logs.go:282] 0 containers: []
	W1210 07:23:28.289836  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:28.289845  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:28.289857  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:28.358472  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:28.358511  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:28.376330  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:28.376356  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:28.457206  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:28.457254  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:28.457294  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:28.499266  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:28.499305  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:31.028733  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:31.040722  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:31.040793  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:31.069274  557955 cri.go:89] found id: ""
	I1210 07:23:31.069305  557955 logs.go:282] 0 containers: []
	W1210 07:23:31.069315  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:31.069328  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:31.069389  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:31.096736  557955 cri.go:89] found id: ""
	I1210 07:23:31.096761  557955 logs.go:282] 0 containers: []
	W1210 07:23:31.096771  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:31.096779  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:31.096846  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:31.128445  557955 cri.go:89] found id: ""
	I1210 07:23:31.128472  557955 logs.go:282] 0 containers: []
	W1210 07:23:31.128481  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:31.128487  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:31.128551  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:31.160551  557955 cri.go:89] found id: ""
	I1210 07:23:31.160578  557955 logs.go:282] 0 containers: []
	W1210 07:23:31.160587  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:31.160594  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:31.160653  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:31.192935  557955 cri.go:89] found id: ""
	I1210 07:23:31.192959  557955 logs.go:282] 0 containers: []
	W1210 07:23:31.192969  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:31.192975  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:31.193034  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:31.220433  557955 cri.go:89] found id: ""
	I1210 07:23:31.220456  557955 logs.go:282] 0 containers: []
	W1210 07:23:31.220464  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:31.220471  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:31.220531  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:31.249978  557955 cri.go:89] found id: ""
	I1210 07:23:31.250053  557955 logs.go:282] 0 containers: []
	W1210 07:23:31.250071  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:31.250078  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:31.250153  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:31.277796  557955 cri.go:89] found id: ""
	I1210 07:23:31.277822  557955 logs.go:282] 0 containers: []
	W1210 07:23:31.277842  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:31.277868  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:31.277885  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:31.345547  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:31.345585  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:31.362327  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:31.362358  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:31.455262  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:31.455283  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:31.455295  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:31.496015  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:31.496053  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:34.027292  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:34.039609  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:34.039682  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:34.067670  557955 cri.go:89] found id: ""
	I1210 07:23:34.067699  557955 logs.go:282] 0 containers: []
	W1210 07:23:34.067709  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:34.067716  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:34.067776  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:34.098728  557955 cri.go:89] found id: ""
	I1210 07:23:34.098756  557955 logs.go:282] 0 containers: []
	W1210 07:23:34.098766  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:34.098772  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:34.098833  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:34.124072  557955 cri.go:89] found id: ""
	I1210 07:23:34.124100  557955 logs.go:282] 0 containers: []
	W1210 07:23:34.124109  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:34.124116  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:34.124175  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:34.155421  557955 cri.go:89] found id: ""
	I1210 07:23:34.155444  557955 logs.go:282] 0 containers: []
	W1210 07:23:34.155453  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:34.155460  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:34.155523  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:34.184264  557955 cri.go:89] found id: ""
	I1210 07:23:34.184292  557955 logs.go:282] 0 containers: []
	W1210 07:23:34.184302  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:34.184308  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:34.184365  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:34.210926  557955 cri.go:89] found id: ""
	I1210 07:23:34.210957  557955 logs.go:282] 0 containers: []
	W1210 07:23:34.210967  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:34.210973  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:34.211030  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:34.243254  557955 cri.go:89] found id: ""
	I1210 07:23:34.243283  557955 logs.go:282] 0 containers: []
	W1210 07:23:34.243292  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:34.243298  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:34.243354  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:34.269905  557955 cri.go:89] found id: ""
	I1210 07:23:34.269931  557955 logs.go:282] 0 containers: []
	W1210 07:23:34.269941  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:34.269950  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:34.269965  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:34.310415  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:34.310449  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:34.343866  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:34.343905  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:34.413386  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:34.413425  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:34.431150  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:34.431179  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:34.502506  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:37.009071  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:37.022751  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:37.022840  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:37.052043  557955 cri.go:89] found id: ""
	I1210 07:23:37.052070  557955 logs.go:282] 0 containers: []
	W1210 07:23:37.052079  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:37.052085  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:37.052148  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:37.077513  557955 cri.go:89] found id: ""
	I1210 07:23:37.077538  557955 logs.go:282] 0 containers: []
	W1210 07:23:37.077547  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:37.077553  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:37.077611  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:37.120347  557955 cri.go:89] found id: ""
	I1210 07:23:37.120373  557955 logs.go:282] 0 containers: []
	W1210 07:23:37.120382  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:37.120388  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:37.120496  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:37.150588  557955 cri.go:89] found id: ""
	I1210 07:23:37.150621  557955 logs.go:282] 0 containers: []
	W1210 07:23:37.150630  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:37.150637  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:37.150719  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:37.181439  557955 cri.go:89] found id: ""
	I1210 07:23:37.181464  557955 logs.go:282] 0 containers: []
	W1210 07:23:37.181473  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:37.181480  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:37.181538  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:37.208816  557955 cri.go:89] found id: ""
	I1210 07:23:37.208849  557955 logs.go:282] 0 containers: []
	W1210 07:23:37.208859  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:37.208866  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:37.208936  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:37.237118  557955 cri.go:89] found id: ""
	I1210 07:23:37.237145  557955 logs.go:282] 0 containers: []
	W1210 07:23:37.237154  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:37.237161  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:37.237254  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:37.263678  557955 cri.go:89] found id: ""
	I1210 07:23:37.263704  557955 logs.go:282] 0 containers: []
	W1210 07:23:37.263714  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:37.263722  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:37.263736  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:37.332684  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:37.332726  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:37.349911  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:37.349941  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:37.433900  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:37.433921  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:37.433932  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:37.477973  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:37.478009  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
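
The cycle above probes each control-plane component once via crictl before falling back to log gathering. A minimal sketch of such a probe loop, written from the commands visible in the log — the component list and output handling are assumptions for illustration, not minikube's actual source:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Component names taken from the probes logged above.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, name := range components {
            // Mirrors the logged command: sudo crictl ps -a --quiet --name=<component>
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("probe %q failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", name)
            } else {
                fmt.Printf("%q: %d container(s)\n", name, len(ids))
            }
        }
    }
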
	I1210 07:23:40.008094  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:40.040161  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:40.040270  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:40.071871  557955 cri.go:89] found id: ""
	I1210 07:23:40.071897  557955 logs.go:282] 0 containers: []
	W1210 07:23:40.071907  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:40.071913  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:40.071974  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:40.101132  557955 cri.go:89] found id: ""
	I1210 07:23:40.101160  557955 logs.go:282] 0 containers: []
	W1210 07:23:40.101169  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:40.101175  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:40.101356  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:40.129557  557955 cri.go:89] found id: ""
	I1210 07:23:40.129586  557955 logs.go:282] 0 containers: []
	W1210 07:23:40.129595  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:40.129605  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:40.129668  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:40.161920  557955 cri.go:89] found id: ""
	I1210 07:23:40.161944  557955 logs.go:282] 0 containers: []
	W1210 07:23:40.161953  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:40.161960  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:40.162070  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:40.190099  557955 cri.go:89] found id: ""
	I1210 07:23:40.190123  557955 logs.go:282] 0 containers: []
	W1210 07:23:40.190132  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:40.190140  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:40.190200  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:40.216944  557955 cri.go:89] found id: ""
	I1210 07:23:40.216968  557955 logs.go:282] 0 containers: []
	W1210 07:23:40.216977  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:40.216984  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:40.217052  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:40.244187  557955 cri.go:89] found id: ""
	I1210 07:23:40.244211  557955 logs.go:282] 0 containers: []
	W1210 07:23:40.244220  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:40.244258  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:40.244345  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:40.271221  557955 cri.go:89] found id: ""
	I1210 07:23:40.271244  557955 logs.go:282] 0 containers: []
	W1210 07:23:40.271253  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:40.271262  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:40.271274  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:40.312082  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:40.312121  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:40.341052  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:40.341083  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:40.411446  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:40.411486  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:40.428750  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:40.428779  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:40.497115  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
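
Every "describe nodes" attempt in this section fails the same way: the kubeconfig targets localhost:8443, and since no kube-apiserver container was found, nothing is listening there. A minimal sketch reproducing that failure mode — the address is taken from the logged error, the timeout is an arbitrary stand-in:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the apiserver endpoint the kubeconfig points at.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // e.g. "connection refused"
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }
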
	I1210 07:23:42.997864  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:43.016139  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:43.016207  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:43.045618  557955 cri.go:89] found id: ""
	I1210 07:23:43.045643  557955 logs.go:282] 0 containers: []
	W1210 07:23:43.045652  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:43.045660  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:43.045717  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:43.084092  557955 cri.go:89] found id: ""
	I1210 07:23:43.084114  557955 logs.go:282] 0 containers: []
	W1210 07:23:43.084123  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:43.084129  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:43.084196  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:43.117308  557955 cri.go:89] found id: ""
	I1210 07:23:43.117333  557955 logs.go:282] 0 containers: []
	W1210 07:23:43.117343  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:43.117350  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:43.117410  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:43.142566  557955 cri.go:89] found id: ""
	I1210 07:23:43.142591  557955 logs.go:282] 0 containers: []
	W1210 07:23:43.142600  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:43.142607  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:43.142665  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:43.172259  557955 cri.go:89] found id: ""
	I1210 07:23:43.172288  557955 logs.go:282] 0 containers: []
	W1210 07:23:43.172297  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:43.172303  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:43.172364  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:43.197299  557955 cri.go:89] found id: ""
	I1210 07:23:43.197324  557955 logs.go:282] 0 containers: []
	W1210 07:23:43.197332  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:43.197338  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:43.197397  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:43.222447  557955 cri.go:89] found id: ""
	I1210 07:23:43.222473  557955 logs.go:282] 0 containers: []
	W1210 07:23:43.222489  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:43.222495  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:43.222580  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:43.250250  557955 cri.go:89] found id: ""
	I1210 07:23:43.250274  557955 logs.go:282] 0 containers: []
	W1210 07:23:43.250283  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:43.250291  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:43.250303  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:43.316772  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:43.316812  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:43.335240  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:43.335272  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:43.430460  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:43.430481  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:43.430495  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:43.474900  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:43.474935  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
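
Each cycle runs the same five gather steps, but their order varies from cycle to cycle (compare the 07:23:37 and 07:23:40 cycles above). That pattern is consistent with iterating a Go map, whose iteration order is randomized — an inference from the log, not confirmed against minikube's source. A sketch of such a command table, with commands copied from the log:

    package main

    import "fmt"

    func main() {
        // Keyed by the label that appears after "Gathering logs for".
        gatherers := map[string]string{
            "kubelet":          `sudo journalctl -u kubelet -n 400`,
            "dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
            "describe nodes":   `sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
            "CRI-O":            `sudo journalctl -u crio -n 400`,
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        // Map iteration order is randomized in Go, matching the shuffled
        // step order seen across cycles in the log.
        for label, cmd := range gatherers {
            fmt.Printf("Gathering logs for %s ...\n\t/bin/bash -c %q\n", label, cmd)
        }
    }
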
	I1210 07:23:46.004078  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:46.019662  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:46.019731  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:46.051467  557955 cri.go:89] found id: ""
	I1210 07:23:46.051490  557955 logs.go:282] 0 containers: []
	W1210 07:23:46.051500  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:46.051506  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:46.051563  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:46.086930  557955 cri.go:89] found id: ""
	I1210 07:23:46.086954  557955 logs.go:282] 0 containers: []
	W1210 07:23:46.086963  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:46.086969  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:46.087025  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:46.117023  557955 cri.go:89] found id: ""
	I1210 07:23:46.117047  557955 logs.go:282] 0 containers: []
	W1210 07:23:46.117055  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:46.117062  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:46.117124  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:46.155721  557955 cri.go:89] found id: ""
	I1210 07:23:46.155749  557955 logs.go:282] 0 containers: []
	W1210 07:23:46.155759  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:46.155766  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:46.155835  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:46.194237  557955 cri.go:89] found id: ""
	I1210 07:23:46.194264  557955 logs.go:282] 0 containers: []
	W1210 07:23:46.194273  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:46.194284  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:46.194363  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:46.231970  557955 cri.go:89] found id: ""
	I1210 07:23:46.231992  557955 logs.go:282] 0 containers: []
	W1210 07:23:46.232001  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:46.232008  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:46.232066  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:46.279601  557955 cri.go:89] found id: ""
	I1210 07:23:46.279627  557955 logs.go:282] 0 containers: []
	W1210 07:23:46.279636  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:46.279643  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:46.279703  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:46.320112  557955 cri.go:89] found id: ""
	I1210 07:23:46.320135  557955 logs.go:282] 0 containers: []
	W1210 07:23:46.320145  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:46.320153  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:46.320167  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:46.452913  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:46.452939  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:46.452953  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:46.504965  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:46.505006  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:46.540391  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:46.540469  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:46.607230  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:46.607270  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
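
The pgrep probes recur roughly every three seconds (07:23:37, :40, :43, :46, ...), i.e. a fixed-interval wait for the apiserver process to appear. A sketch of that cadence as a plain polling loop — the 3-second interval is read off the timestamps, while the one-minute deadline is an illustrative stand-in, not a value taken from minikube:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(1 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            // Mirrors the logged probe: sudo pgrep -xnf kube-apiserver.*minikube.*
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }
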
	I1210 07:23:49.125316  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:49.137662  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:49.137733  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:49.184361  557955 cri.go:89] found id: ""
	I1210 07:23:49.184389  557955 logs.go:282] 0 containers: []
	W1210 07:23:49.184398  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:49.184404  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:49.184468  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:49.221827  557955 cri.go:89] found id: ""
	I1210 07:23:49.221855  557955 logs.go:282] 0 containers: []
	W1210 07:23:49.221865  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:49.221871  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:49.221929  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:49.251255  557955 cri.go:89] found id: ""
	I1210 07:23:49.251284  557955 logs.go:282] 0 containers: []
	W1210 07:23:49.251292  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:49.251300  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:49.251355  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:49.283905  557955 cri.go:89] found id: ""
	I1210 07:23:49.283932  557955 logs.go:282] 0 containers: []
	W1210 07:23:49.283941  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:49.283947  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:49.284002  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:49.314044  557955 cri.go:89] found id: ""
	I1210 07:23:49.314072  557955 logs.go:282] 0 containers: []
	W1210 07:23:49.314081  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:49.314087  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:49.314174  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:49.343697  557955 cri.go:89] found id: ""
	I1210 07:23:49.343733  557955 logs.go:282] 0 containers: []
	W1210 07:23:49.343742  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:49.343749  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:49.343833  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:49.389677  557955 cri.go:89] found id: ""
	I1210 07:23:49.389710  557955 logs.go:282] 0 containers: []
	W1210 07:23:49.389719  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:49.389741  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:49.389823  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:49.439603  557955 cri.go:89] found id: ""
	I1210 07:23:49.439637  557955 logs.go:282] 0 containers: []
	W1210 07:23:49.439646  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:49.439671  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:49.439689  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:49.556930  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:49.556953  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:49.556965  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:49.605733  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:49.605772  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:49.661490  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:49.661521  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:49.743230  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:49.743274  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:52.265331  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:52.276961  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:52.277031  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:52.306956  557955 cri.go:89] found id: ""
	I1210 07:23:52.306979  557955 logs.go:282] 0 containers: []
	W1210 07:23:52.306994  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:52.307000  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:52.307062  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:52.333360  557955 cri.go:89] found id: ""
	I1210 07:23:52.333387  557955 logs.go:282] 0 containers: []
	W1210 07:23:52.333396  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:52.333403  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:52.333466  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:52.361731  557955 cri.go:89] found id: ""
	I1210 07:23:52.361755  557955 logs.go:282] 0 containers: []
	W1210 07:23:52.361764  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:52.361771  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:52.361834  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:52.394927  557955 cri.go:89] found id: ""
	I1210 07:23:52.394952  557955 logs.go:282] 0 containers: []
	W1210 07:23:52.394961  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:52.394967  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:52.395026  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:52.464243  557955 cri.go:89] found id: ""
	I1210 07:23:52.464268  557955 logs.go:282] 0 containers: []
	W1210 07:23:52.464278  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:52.464284  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:52.464342  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:52.499691  557955 cri.go:89] found id: ""
	I1210 07:23:52.499718  557955 logs.go:282] 0 containers: []
	W1210 07:23:52.499727  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:52.499734  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:52.499802  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:52.532211  557955 cri.go:89] found id: ""
	I1210 07:23:52.532239  557955 logs.go:282] 0 containers: []
	W1210 07:23:52.532249  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:52.532255  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:52.532317  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:52.573235  557955 cri.go:89] found id: ""
	I1210 07:23:52.573261  557955 logs.go:282] 0 containers: []
	W1210 07:23:52.573270  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:52.573280  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:52.573293  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:52.653152  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:52.653300  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:52.672319  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:52.672348  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:52.799514  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:52.799546  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:52.799560  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:52.851238  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:52.851276  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:55.390398  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:55.402814  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:55.402880  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:55.436042  557955 cri.go:89] found id: ""
	I1210 07:23:55.436065  557955 logs.go:282] 0 containers: []
	W1210 07:23:55.436075  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:55.436081  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:55.436139  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:55.467976  557955 cri.go:89] found id: ""
	I1210 07:23:55.468000  557955 logs.go:282] 0 containers: []
	W1210 07:23:55.468008  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:55.468015  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:55.468076  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:55.494585  557955 cri.go:89] found id: ""
	I1210 07:23:55.494610  557955 logs.go:282] 0 containers: []
	W1210 07:23:55.494619  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:55.494626  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:55.494685  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:55.521698  557955 cri.go:89] found id: ""
	I1210 07:23:55.521724  557955 logs.go:282] 0 containers: []
	W1210 07:23:55.521733  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:55.521739  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:55.521802  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:55.548690  557955 cri.go:89] found id: ""
	I1210 07:23:55.548717  557955 logs.go:282] 0 containers: []
	W1210 07:23:55.548726  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:55.548733  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:55.548794  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:55.575305  557955 cri.go:89] found id: ""
	I1210 07:23:55.575332  557955 logs.go:282] 0 containers: []
	W1210 07:23:55.575342  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:55.575350  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:55.575415  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:55.602385  557955 cri.go:89] found id: ""
	I1210 07:23:55.602412  557955 logs.go:282] 0 containers: []
	W1210 07:23:55.602421  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:55.602428  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:55.602491  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:55.630523  557955 cri.go:89] found id: ""
	I1210 07:23:55.630549  557955 logs.go:282] 0 containers: []
	W1210 07:23:55.630558  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:55.630567  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:55.630579  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:23:55.704930  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:55.704979  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:55.723005  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:55.723035  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:55.791830  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:55.791850  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:55.791863  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:55.832685  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:55.832720  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:58.371567  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:23:58.399184  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:23:58.399252  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:23:58.443777  557955 cri.go:89] found id: ""
	I1210 07:23:58.443800  557955 logs.go:282] 0 containers: []
	W1210 07:23:58.443809  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:23:58.443815  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:23:58.443886  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:23:58.499505  557955 cri.go:89] found id: ""
	I1210 07:23:58.499528  557955 logs.go:282] 0 containers: []
	W1210 07:23:58.499537  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:23:58.499543  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:23:58.499599  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:23:58.538543  557955 cri.go:89] found id: ""
	I1210 07:23:58.538565  557955 logs.go:282] 0 containers: []
	W1210 07:23:58.538574  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:23:58.538580  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:23:58.538638  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:23:58.567885  557955 cri.go:89] found id: ""
	I1210 07:23:58.567907  557955 logs.go:282] 0 containers: []
	W1210 07:23:58.567915  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:23:58.567922  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:23:58.567980  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:23:58.599527  557955 cri.go:89] found id: ""
	I1210 07:23:58.599549  557955 logs.go:282] 0 containers: []
	W1210 07:23:58.599559  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:23:58.599566  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:23:58.599625  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:23:58.639840  557955 cri.go:89] found id: ""
	I1210 07:23:58.639862  557955 logs.go:282] 0 containers: []
	W1210 07:23:58.639871  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:23:58.639877  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:23:58.639936  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:23:58.669623  557955 cri.go:89] found id: ""
	I1210 07:23:58.669645  557955 logs.go:282] 0 containers: []
	W1210 07:23:58.669653  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:23:58.669660  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:23:58.669716  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:23:58.704633  557955 cri.go:89] found id: ""
	I1210 07:23:58.704654  557955 logs.go:282] 0 containers: []
	W1210 07:23:58.704663  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:23:58.704672  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:23:58.704684  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:23:58.723520  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:23:58.723613  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:23:58.814419  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:23:58.814436  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:23:58.814448  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:23:58.868859  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:23:58.868945  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:23:58.914632  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:23:58.914708  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:01.484954  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:01.496821  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:01.496901  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:01.529113  557955 cri.go:89] found id: ""
	I1210 07:24:01.529136  557955 logs.go:282] 0 containers: []
	W1210 07:24:01.529146  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:01.529152  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:01.529251  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:01.555486  557955 cri.go:89] found id: ""
	I1210 07:24:01.555512  557955 logs.go:282] 0 containers: []
	W1210 07:24:01.555522  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:01.555528  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:01.555590  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:01.587269  557955 cri.go:89] found id: ""
	I1210 07:24:01.587293  557955 logs.go:282] 0 containers: []
	W1210 07:24:01.587301  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:01.587307  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:01.587367  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:01.615008  557955 cri.go:89] found id: ""
	I1210 07:24:01.615033  557955 logs.go:282] 0 containers: []
	W1210 07:24:01.615042  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:01.615049  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:01.615107  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:01.641874  557955 cri.go:89] found id: ""
	I1210 07:24:01.641903  557955 logs.go:282] 0 containers: []
	W1210 07:24:01.641912  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:01.641919  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:01.641984  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:01.669125  557955 cri.go:89] found id: ""
	I1210 07:24:01.669149  557955 logs.go:282] 0 containers: []
	W1210 07:24:01.669158  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:01.669165  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:01.669245  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:01.700435  557955 cri.go:89] found id: ""
	I1210 07:24:01.700462  557955 logs.go:282] 0 containers: []
	W1210 07:24:01.700471  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:01.700477  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:01.700540  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:01.731984  557955 cri.go:89] found id: ""
	I1210 07:24:01.732049  557955 logs.go:282] 0 containers: []
	W1210 07:24:01.732082  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:01.732105  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:01.732147  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:01.804051  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:01.804073  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:01.804085  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:01.851250  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:01.851288  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:01.886008  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:01.886037  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:01.953129  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:01.953165  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:04.471441  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:04.484205  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:04.484325  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:04.521166  557955 cri.go:89] found id: ""
	I1210 07:24:04.521264  557955 logs.go:282] 0 containers: []
	W1210 07:24:04.521289  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:04.521309  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:04.521422  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:04.561131  557955 cri.go:89] found id: ""
	I1210 07:24:04.561232  557955 logs.go:282] 0 containers: []
	W1210 07:24:04.561257  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:04.561276  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:04.561390  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:04.599569  557955 cri.go:89] found id: ""
	I1210 07:24:04.599643  557955 logs.go:282] 0 containers: []
	W1210 07:24:04.599667  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:04.599689  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:04.599797  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:04.640055  557955 cri.go:89] found id: ""
	I1210 07:24:04.640127  557955 logs.go:282] 0 containers: []
	W1210 07:24:04.640151  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:04.640171  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:04.640280  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:04.671585  557955 cri.go:89] found id: ""
	I1210 07:24:04.671658  557955 logs.go:282] 0 containers: []
	W1210 07:24:04.671681  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:04.671701  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:04.671810  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:04.703907  557955 cri.go:89] found id: ""
	I1210 07:24:04.703985  557955 logs.go:282] 0 containers: []
	W1210 07:24:04.704008  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:04.704026  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:04.704131  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:04.743728  557955 cri.go:89] found id: ""
	I1210 07:24:04.743804  557955 logs.go:282] 0 containers: []
	W1210 07:24:04.743826  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:04.743847  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:04.743963  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:04.787486  557955 cri.go:89] found id: ""
	I1210 07:24:04.787547  557955 logs.go:282] 0 containers: []
	W1210 07:24:04.787579  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:04.787602  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:04.787640  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:04.888317  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:04.888387  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:04.888413  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:04.936728  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:04.936809  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:04.972058  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:04.972089  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:05.046424  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:05.046467  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:07.565726  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:07.577418  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:07.577489  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:07.603566  557955 cri.go:89] found id: ""
	I1210 07:24:07.603603  557955 logs.go:282] 0 containers: []
	W1210 07:24:07.603613  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:07.603620  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:07.603716  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:07.630535  557955 cri.go:89] found id: ""
	I1210 07:24:07.630604  557955 logs.go:282] 0 containers: []
	W1210 07:24:07.630631  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:07.630650  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:07.630723  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:07.656844  557955 cri.go:89] found id: ""
	I1210 07:24:07.656882  557955 logs.go:282] 0 containers: []
	W1210 07:24:07.656892  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:07.656899  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:07.656958  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:07.684627  557955 cri.go:89] found id: ""
	I1210 07:24:07.684652  557955 logs.go:282] 0 containers: []
	W1210 07:24:07.684661  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:07.684668  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:07.684731  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:07.712730  557955 cri.go:89] found id: ""
	I1210 07:24:07.712765  557955 logs.go:282] 0 containers: []
	W1210 07:24:07.712775  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:07.712782  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:07.712843  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:07.739191  557955 cri.go:89] found id: ""
	I1210 07:24:07.739219  557955 logs.go:282] 0 containers: []
	W1210 07:24:07.739228  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:07.739235  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:07.739298  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:07.770813  557955 cri.go:89] found id: ""
	I1210 07:24:07.770885  557955 logs.go:282] 0 containers: []
	W1210 07:24:07.770901  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:07.770908  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:07.770996  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:07.798485  557955 cri.go:89] found id: ""
	I1210 07:24:07.798556  557955 logs.go:282] 0 containers: []
	W1210 07:24:07.798580  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:07.798596  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:07.798609  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:07.866333  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:07.866372  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:07.883519  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:07.883555  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:07.950154  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:07.950222  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:07.950262  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:07.991485  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:07.991521  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:10.527990  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:10.546718  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:10.546811  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:10.574328  557955 cri.go:89] found id: ""
	I1210 07:24:10.574359  557955 logs.go:282] 0 containers: []
	W1210 07:24:10.574368  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:10.574375  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:10.574435  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:10.604617  557955 cri.go:89] found id: ""
	I1210 07:24:10.604640  557955 logs.go:282] 0 containers: []
	W1210 07:24:10.604649  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:10.604659  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:10.604718  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:10.630599  557955 cri.go:89] found id: ""
	I1210 07:24:10.630667  557955 logs.go:282] 0 containers: []
	W1210 07:24:10.630695  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:10.630717  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:10.630780  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:10.657405  557955 cri.go:89] found id: ""
	I1210 07:24:10.657438  557955 logs.go:282] 0 containers: []
	W1210 07:24:10.657448  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:10.657470  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:10.657557  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:10.684161  557955 cri.go:89] found id: ""
	I1210 07:24:10.684190  557955 logs.go:282] 0 containers: []
	W1210 07:24:10.684199  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:10.684205  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:10.684264  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:10.712951  557955 cri.go:89] found id: ""
	I1210 07:24:10.712979  557955 logs.go:282] 0 containers: []
	W1210 07:24:10.712989  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:10.712995  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:10.713054  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:10.740548  557955 cri.go:89] found id: ""
	I1210 07:24:10.740576  557955 logs.go:282] 0 containers: []
	W1210 07:24:10.740586  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:10.740592  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:10.740654  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:10.767660  557955 cri.go:89] found id: ""
	I1210 07:24:10.767690  557955 logs.go:282] 0 containers: []
	W1210 07:24:10.767699  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:10.767707  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:10.767718  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:10.801668  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:10.801700  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:10.869758  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:10.869798  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:10.886420  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:10.886451  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:10.955378  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
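The repeated "connection refused" means nothing is listening on the apiserver's secure port at all, as opposed to a TLS or auth failure. A standalone check in Go that reproduces what kubectl runs into (a sketch; localhost:8443 is taken from the kubeconfig used in the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // With no kube-apiserver container running, this is the expected path,
            // matching kubectl's "connection to the server localhost:8443 was refused".
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }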
	I1210 07:24:10.955437  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:10.955463  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
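When the probes come up empty, logs.go fans out to the same five sources on every round (container status, kubelet, dmesg, describe nodes, CRI-O); only their order varies. A table-driven sketch of that fan-out, with the shell commands copied verbatim from the log (the kubectl path exists only inside the minikube node, so "describe nodes" will fail anywhere else):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range sources {
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            if err != nil {
                // "describe nodes" exits 1 while the apiserver is down, as above.
                fmt.Printf("gathering %s failed: %v\n", s.name, err)
                continue
            }
            fmt.Printf("gathered %d bytes for %s\n", len(out), s.name)
        }
    }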
	I1210 07:24:13.498411  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:13.510041  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:13.510112  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:13.540451  557955 cri.go:89] found id: ""
	I1210 07:24:13.540477  557955 logs.go:282] 0 containers: []
	W1210 07:24:13.540486  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:13.540493  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:13.540552  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:13.568342  557955 cri.go:89] found id: ""
	I1210 07:24:13.568376  557955 logs.go:282] 0 containers: []
	W1210 07:24:13.568387  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:13.568396  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:13.568455  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:13.594554  557955 cri.go:89] found id: ""
	I1210 07:24:13.594579  557955 logs.go:282] 0 containers: []
	W1210 07:24:13.594588  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:13.594594  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:13.594674  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:13.619327  557955 cri.go:89] found id: ""
	I1210 07:24:13.619398  557955 logs.go:282] 0 containers: []
	W1210 07:24:13.619430  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:13.619450  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:13.619514  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:13.649483  557955 cri.go:89] found id: ""
	I1210 07:24:13.649507  557955 logs.go:282] 0 containers: []
	W1210 07:24:13.649516  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:13.649523  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:13.649591  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:13.676838  557955 cri.go:89] found id: ""
	I1210 07:24:13.676926  557955 logs.go:282] 0 containers: []
	W1210 07:24:13.676950  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:13.676971  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:13.677091  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:13.709017  557955 cri.go:89] found id: ""
	I1210 07:24:13.709088  557955 logs.go:282] 0 containers: []
	W1210 07:24:13.709111  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:13.709130  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:13.709261  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:13.740062  557955 cri.go:89] found id: ""
	I1210 07:24:13.740088  557955 logs.go:282] 0 containers: []
	W1210 07:24:13.740097  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:13.740106  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:13.740125  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:13.805857  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:13.805880  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:13.805891  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:13.848276  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:13.848311  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:13.880480  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:13.880510  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:13.952736  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:13.952775  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:16.470426  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:16.481790  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:16.481861  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:16.507374  557955 cri.go:89] found id: ""
	I1210 07:24:16.507403  557955 logs.go:282] 0 containers: []
	W1210 07:24:16.507412  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:16.507418  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:16.507480  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:16.532774  557955 cri.go:89] found id: ""
	I1210 07:24:16.532802  557955 logs.go:282] 0 containers: []
	W1210 07:24:16.532811  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:16.532817  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:16.532878  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:16.563424  557955 cri.go:89] found id: ""
	I1210 07:24:16.563448  557955 logs.go:282] 0 containers: []
	W1210 07:24:16.563456  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:16.563475  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:16.563553  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:16.589770  557955 cri.go:89] found id: ""
	I1210 07:24:16.589796  557955 logs.go:282] 0 containers: []
	W1210 07:24:16.589806  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:16.589813  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:16.589869  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:16.619482  557955 cri.go:89] found id: ""
	I1210 07:24:16.619511  557955 logs.go:282] 0 containers: []
	W1210 07:24:16.619521  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:16.619527  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:16.619589  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:16.647573  557955 cri.go:89] found id: ""
	I1210 07:24:16.647602  557955 logs.go:282] 0 containers: []
	W1210 07:24:16.647612  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:16.647619  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:16.647676  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:16.672874  557955 cri.go:89] found id: ""
	I1210 07:24:16.672906  557955 logs.go:282] 0 containers: []
	W1210 07:24:16.672915  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:16.672921  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:16.672978  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:16.699740  557955 cri.go:89] found id: ""
	I1210 07:24:16.699768  557955 logs.go:282] 0 containers: []
	W1210 07:24:16.699777  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:16.699786  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:16.699801  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:16.772927  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:16.772966  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:16.789698  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:16.789731  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:16.863307  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:16.863371  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:16.863397  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:16.903178  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:16.903215  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:19.433343  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:19.444768  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:19.444837  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:19.470524  557955 cri.go:89] found id: ""
	I1210 07:24:19.470548  557955 logs.go:282] 0 containers: []
	W1210 07:24:19.470557  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:19.470563  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:19.470620  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:19.495685  557955 cri.go:89] found id: ""
	I1210 07:24:19.495707  557955 logs.go:282] 0 containers: []
	W1210 07:24:19.495716  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:19.495727  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:19.495784  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:19.522516  557955 cri.go:89] found id: ""
	I1210 07:24:19.522541  557955 logs.go:282] 0 containers: []
	W1210 07:24:19.522549  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:19.522556  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:19.522613  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:19.552827  557955 cri.go:89] found id: ""
	I1210 07:24:19.552853  557955 logs.go:282] 0 containers: []
	W1210 07:24:19.552862  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:19.552868  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:19.552933  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:19.580537  557955 cri.go:89] found id: ""
	I1210 07:24:19.580567  557955 logs.go:282] 0 containers: []
	W1210 07:24:19.580577  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:19.580583  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:19.580650  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:19.619110  557955 cri.go:89] found id: ""
	I1210 07:24:19.619137  557955 logs.go:282] 0 containers: []
	W1210 07:24:19.619147  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:19.619153  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:19.619214  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:19.665118  557955 cri.go:89] found id: ""
	I1210 07:24:19.665146  557955 logs.go:282] 0 containers: []
	W1210 07:24:19.665155  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:19.665161  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:19.665239  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:19.706598  557955 cri.go:89] found id: ""
	I1210 07:24:19.706620  557955 logs.go:282] 0 containers: []
	W1210 07:24:19.706629  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:19.706638  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:19.706650  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:19.800388  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:19.800407  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:19.800420  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:19.850831  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:19.850871  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:19.892465  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:19.892497  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:19.973395  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:19.973433  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:22.496104  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:22.507781  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:22.507856  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:22.534407  557955 cri.go:89] found id: ""
	I1210 07:24:22.534436  557955 logs.go:282] 0 containers: []
	W1210 07:24:22.534445  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:22.534451  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:22.534513  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:22.562513  557955 cri.go:89] found id: ""
	I1210 07:24:22.562539  557955 logs.go:282] 0 containers: []
	W1210 07:24:22.562548  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:22.562554  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:22.562612  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:22.589561  557955 cri.go:89] found id: ""
	I1210 07:24:22.589588  557955 logs.go:282] 0 containers: []
	W1210 07:24:22.589597  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:22.589604  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:22.589663  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:22.616787  557955 cri.go:89] found id: ""
	I1210 07:24:22.616814  557955 logs.go:282] 0 containers: []
	W1210 07:24:22.616823  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:22.616829  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:22.616895  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:22.643545  557955 cri.go:89] found id: ""
	I1210 07:24:22.643568  557955 logs.go:282] 0 containers: []
	W1210 07:24:22.643577  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:22.643584  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:22.643646  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:22.670311  557955 cri.go:89] found id: ""
	I1210 07:24:22.670333  557955 logs.go:282] 0 containers: []
	W1210 07:24:22.670341  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:22.670349  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:22.670409  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:22.696765  557955 cri.go:89] found id: ""
	I1210 07:24:22.696788  557955 logs.go:282] 0 containers: []
	W1210 07:24:22.696797  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:22.696803  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:22.696861  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:22.722915  557955 cri.go:89] found id: ""
	I1210 07:24:22.722941  557955 logs.go:282] 0 containers: []
	W1210 07:24:22.722950  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:22.722959  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:22.722976  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:22.762124  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:22.762161  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:22.793317  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:22.793345  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:22.870092  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:22.870133  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:22.886938  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:22.886969  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:22.956004  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:25.456287  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:25.468285  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:25.468364  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:25.494692  557955 cri.go:89] found id: ""
	I1210 07:24:25.494717  557955 logs.go:282] 0 containers: []
	W1210 07:24:25.494725  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:25.494743  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:25.494804  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:25.525246  557955 cri.go:89] found id: ""
	I1210 07:24:25.525329  557955 logs.go:282] 0 containers: []
	W1210 07:24:25.525343  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:25.525350  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:25.525438  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:25.553398  557955 cri.go:89] found id: ""
	I1210 07:24:25.553421  557955 logs.go:282] 0 containers: []
	W1210 07:24:25.553430  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:25.553437  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:25.553503  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:25.579909  557955 cri.go:89] found id: ""
	I1210 07:24:25.579932  557955 logs.go:282] 0 containers: []
	W1210 07:24:25.579942  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:25.579948  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:25.580006  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:25.608697  557955 cri.go:89] found id: ""
	I1210 07:24:25.608727  557955 logs.go:282] 0 containers: []
	W1210 07:24:25.608736  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:25.608742  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:25.608803  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:25.635283  557955 cri.go:89] found id: ""
	I1210 07:24:25.635358  557955 logs.go:282] 0 containers: []
	W1210 07:24:25.635374  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:25.635382  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:25.635459  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:25.661630  557955 cri.go:89] found id: ""
	I1210 07:24:25.661657  557955 logs.go:282] 0 containers: []
	W1210 07:24:25.661666  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:25.661672  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:25.661733  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:25.691782  557955 cri.go:89] found id: ""
	I1210 07:24:25.691807  557955 logs.go:282] 0 containers: []
	W1210 07:24:25.691817  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:25.691826  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:25.691838  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:25.757489  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:25.757556  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:25.757574  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:25.797781  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:25.797818  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:25.827050  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:25.827076  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:25.902760  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:25.902842  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:28.421339  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:28.433156  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:28.433251  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:28.459616  557955 cri.go:89] found id: ""
	I1210 07:24:28.459643  557955 logs.go:282] 0 containers: []
	W1210 07:24:28.459652  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:28.459658  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:28.459715  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:28.487537  557955 cri.go:89] found id: ""
	I1210 07:24:28.487567  557955 logs.go:282] 0 containers: []
	W1210 07:24:28.487576  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:28.487582  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:28.487647  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:28.519802  557955 cri.go:89] found id: ""
	I1210 07:24:28.519826  557955 logs.go:282] 0 containers: []
	W1210 07:24:28.519834  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:28.519841  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:28.519898  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:28.546316  557955 cri.go:89] found id: ""
	I1210 07:24:28.546342  557955 logs.go:282] 0 containers: []
	W1210 07:24:28.546350  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:28.546357  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:28.546418  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:28.576515  557955 cri.go:89] found id: ""
	I1210 07:24:28.576541  557955 logs.go:282] 0 containers: []
	W1210 07:24:28.576550  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:28.576556  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:28.576620  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:28.603610  557955 cri.go:89] found id: ""
	I1210 07:24:28.603638  557955 logs.go:282] 0 containers: []
	W1210 07:24:28.603649  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:28.603655  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:28.603714  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:28.630435  557955 cri.go:89] found id: ""
	I1210 07:24:28.630459  557955 logs.go:282] 0 containers: []
	W1210 07:24:28.630468  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:28.630474  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:28.630530  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:28.657996  557955 cri.go:89] found id: ""
	I1210 07:24:28.658024  557955 logs.go:282] 0 containers: []
	W1210 07:24:28.658033  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:28.658042  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:28.658055  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:28.723616  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:28.723637  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:28.723651  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:28.763801  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:28.763837  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:28.794667  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:28.794693  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:28.876882  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:28.876934  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:31.394862  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:31.407975  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:31.408045  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:31.440214  557955 cri.go:89] found id: ""
	I1210 07:24:31.440240  557955 logs.go:282] 0 containers: []
	W1210 07:24:31.440249  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:31.440256  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:31.440316  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:31.467790  557955 cri.go:89] found id: ""
	I1210 07:24:31.467816  557955 logs.go:282] 0 containers: []
	W1210 07:24:31.467825  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:31.467832  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:31.467891  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:31.498766  557955 cri.go:89] found id: ""
	I1210 07:24:31.498794  557955 logs.go:282] 0 containers: []
	W1210 07:24:31.498803  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:31.498810  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:31.498874  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:31.526089  557955 cri.go:89] found id: ""
	I1210 07:24:31.526116  557955 logs.go:282] 0 containers: []
	W1210 07:24:31.526126  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:31.526133  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:31.526193  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:31.552378  557955 cri.go:89] found id: ""
	I1210 07:24:31.552406  557955 logs.go:282] 0 containers: []
	W1210 07:24:31.552416  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:31.552422  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:31.552480  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:31.577932  557955 cri.go:89] found id: ""
	I1210 07:24:31.577957  557955 logs.go:282] 0 containers: []
	W1210 07:24:31.577966  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:31.577972  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:31.578028  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:31.603283  557955 cri.go:89] found id: ""
	I1210 07:24:31.603309  557955 logs.go:282] 0 containers: []
	W1210 07:24:31.603318  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:31.603324  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:31.603387  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:31.629890  557955 cri.go:89] found id: ""
	I1210 07:24:31.629913  557955 logs.go:282] 0 containers: []
	W1210 07:24:31.629922  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:31.629932  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:31.629945  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:31.698207  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:31.698246  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:31.714729  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:31.714761  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:31.778999  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:31.779023  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:31.779035  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:31.820892  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:31.820936  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:34.357394  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:34.368635  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:34.368709  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:34.401538  557955 cri.go:89] found id: ""
	I1210 07:24:34.401574  557955 logs.go:282] 0 containers: []
	W1210 07:24:34.401584  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:34.401590  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:34.401674  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:34.446864  557955 cri.go:89] found id: ""
	I1210 07:24:34.446893  557955 logs.go:282] 0 containers: []
	W1210 07:24:34.446902  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:34.446914  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:34.446985  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:34.473740  557955 cri.go:89] found id: ""
	I1210 07:24:34.473764  557955 logs.go:282] 0 containers: []
	W1210 07:24:34.473773  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:34.473779  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:34.473837  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:34.499402  557955 cri.go:89] found id: ""
	I1210 07:24:34.499430  557955 logs.go:282] 0 containers: []
	W1210 07:24:34.499439  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:34.499446  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:34.499505  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:34.525113  557955 cri.go:89] found id: ""
	I1210 07:24:34.525140  557955 logs.go:282] 0 containers: []
	W1210 07:24:34.525149  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:34.525160  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:34.525263  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:34.552473  557955 cri.go:89] found id: ""
	I1210 07:24:34.552502  557955 logs.go:282] 0 containers: []
	W1210 07:24:34.552512  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:34.552518  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:34.552578  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:34.579926  557955 cri.go:89] found id: ""
	I1210 07:24:34.579953  557955 logs.go:282] 0 containers: []
	W1210 07:24:34.579962  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:34.579969  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:34.580075  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:34.606713  557955 cri.go:89] found id: ""
	I1210 07:24:34.606738  557955 logs.go:282] 0 containers: []
	W1210 07:24:34.606747  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:34.606757  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:34.606770  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:34.675837  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:34.675876  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:34.693548  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:34.693579  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:34.766848  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:34.766917  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:34.766937  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:34.808542  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:34.808588  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:37.346729  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:37.359010  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:37.359088  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:37.404964  557955 cri.go:89] found id: ""
	I1210 07:24:37.404992  557955 logs.go:282] 0 containers: []
	W1210 07:24:37.405001  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:37.405007  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:37.405065  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:37.459233  557955 cri.go:89] found id: ""
	I1210 07:24:37.459257  557955 logs.go:282] 0 containers: []
	W1210 07:24:37.459266  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:37.459272  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:37.459333  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:37.519424  557955 cri.go:89] found id: ""
	I1210 07:24:37.519448  557955 logs.go:282] 0 containers: []
	W1210 07:24:37.519457  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:37.519463  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:37.519525  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:37.559631  557955 cri.go:89] found id: ""
	I1210 07:24:37.559658  557955 logs.go:282] 0 containers: []
	W1210 07:24:37.559667  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:37.559673  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:37.559731  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:37.594002  557955 cri.go:89] found id: ""
	I1210 07:24:37.594031  557955 logs.go:282] 0 containers: []
	W1210 07:24:37.594041  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:37.594053  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:37.594115  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:37.621066  557955 cri.go:89] found id: ""
	I1210 07:24:37.621094  557955 logs.go:282] 0 containers: []
	W1210 07:24:37.621103  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:37.621110  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:37.621167  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:37.647559  557955 cri.go:89] found id: ""
	I1210 07:24:37.647582  557955 logs.go:282] 0 containers: []
	W1210 07:24:37.647592  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:37.647598  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:37.647660  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:37.674461  557955 cri.go:89] found id: ""
	I1210 07:24:37.674485  557955 logs.go:282] 0 containers: []
	W1210 07:24:37.674494  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:37.674503  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:37.674535  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:37.706961  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:37.706990  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:37.775632  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:37.775668  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:37.792672  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:37.792709  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:37.863576  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:37.863598  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:37.863611  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:40.405311  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:40.418796  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:40.418865  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:40.467666  557955 cri.go:89] found id: ""
	I1210 07:24:40.467690  557955 logs.go:282] 0 containers: []
	W1210 07:24:40.467700  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:40.467706  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:40.467771  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:40.511497  557955 cri.go:89] found id: ""
	I1210 07:24:40.511524  557955 logs.go:282] 0 containers: []
	W1210 07:24:40.511533  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:40.511539  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:40.511597  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:40.550630  557955 cri.go:89] found id: ""
	I1210 07:24:40.550654  557955 logs.go:282] 0 containers: []
	W1210 07:24:40.550662  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:40.550669  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:40.550726  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:40.587037  557955 cri.go:89] found id: ""
	I1210 07:24:40.587058  557955 logs.go:282] 0 containers: []
	W1210 07:24:40.587068  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:40.587074  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:40.587130  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:40.632847  557955 cri.go:89] found id: ""
	I1210 07:24:40.632870  557955 logs.go:282] 0 containers: []
	W1210 07:24:40.632880  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:40.632886  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:40.632957  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:40.681050  557955 cri.go:89] found id: ""
	I1210 07:24:40.681073  557955 logs.go:282] 0 containers: []
	W1210 07:24:40.681083  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:40.681089  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:40.681148  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:40.715289  557955 cri.go:89] found id: ""
	I1210 07:24:40.715312  557955 logs.go:282] 0 containers: []
	W1210 07:24:40.715321  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:40.715328  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:40.715387  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:40.762216  557955 cri.go:89] found id: ""
	I1210 07:24:40.762240  557955 logs.go:282] 0 containers: []
	W1210 07:24:40.762249  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:40.762258  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:40.762273  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:40.863814  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:40.863832  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:40.863844  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:40.908839  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:40.908877  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:40.947949  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:40.947979  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:41.033986  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:41.034023  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:43.551301  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:43.563415  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:43.563481  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:43.594316  557955 cri.go:89] found id: ""
	I1210 07:24:43.594339  557955 logs.go:282] 0 containers: []
	W1210 07:24:43.594348  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:43.594354  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:43.594410  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:43.623359  557955 cri.go:89] found id: ""
	I1210 07:24:43.623382  557955 logs.go:282] 0 containers: []
	W1210 07:24:43.623390  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:43.623397  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:43.623457  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:43.651703  557955 cri.go:89] found id: ""
	I1210 07:24:43.651725  557955 logs.go:282] 0 containers: []
	W1210 07:24:43.651732  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:43.651738  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:43.651787  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:43.684177  557955 cri.go:89] found id: ""
	I1210 07:24:43.684200  557955 logs.go:282] 0 containers: []
	W1210 07:24:43.684209  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:43.684215  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:43.684273  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:43.712619  557955 cri.go:89] found id: ""
	I1210 07:24:43.712641  557955 logs.go:282] 0 containers: []
	W1210 07:24:43.712649  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:43.712655  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:43.712710  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:43.752523  557955 cri.go:89] found id: ""
	I1210 07:24:43.752546  557955 logs.go:282] 0 containers: []
	W1210 07:24:43.752555  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:43.752561  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:43.752621  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:43.789304  557955 cri.go:89] found id: ""
	I1210 07:24:43.789325  557955 logs.go:282] 0 containers: []
	W1210 07:24:43.789334  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:43.789340  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:43.789397  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:43.831830  557955 cri.go:89] found id: ""
	I1210 07:24:43.831851  557955 logs.go:282] 0 containers: []
	W1210 07:24:43.831860  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:43.831869  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:43.831880  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:43.923485  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:43.923518  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:43.943589  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:43.943660  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:44.037360  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:44.037382  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:44.037394  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:44.080794  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:44.080891  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
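Each cycle above enumerates the control-plane containers by name with "sudo crictl ps -a --quiet --name=<component>"; with --quiet, crictl prints one container ID per line, so empty output is what logs.go reports as "0 containers". A minimal Go sketch of that check follows (the helper name and error handling are illustrative, not minikube's actual code; it assumes crictl is on PATH and passwordless sudo is available, as on the minikube node):

    // listContainers returns the IDs crictl reports for a given name filter.
    // Sketch of the pattern visible in the log above, not minikube's code.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line) // crictl --quiet prints one container ID per line
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Printf("listing %q failed: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
    	}
    }

In this run every filter comes back empty, which is why each cycle falls back to gathering journals, dmesg, and node descriptions instead of container logs.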
	I1210 07:24:46.672822  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:46.684623  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:46.684694  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:46.712229  557955 cri.go:89] found id: ""
	I1210 07:24:46.712252  557955 logs.go:282] 0 containers: []
	W1210 07:24:46.712261  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:46.712267  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:46.712329  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:46.739146  557955 cri.go:89] found id: ""
	I1210 07:24:46.739169  557955 logs.go:282] 0 containers: []
	W1210 07:24:46.739178  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:46.739184  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:46.739249  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:46.774459  557955 cri.go:89] found id: ""
	I1210 07:24:46.774482  557955 logs.go:282] 0 containers: []
	W1210 07:24:46.774491  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:46.774499  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:46.774560  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:46.801234  557955 cri.go:89] found id: ""
	I1210 07:24:46.801262  557955 logs.go:282] 0 containers: []
	W1210 07:24:46.801272  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:46.801278  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:46.801336  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:46.831447  557955 cri.go:89] found id: ""
	I1210 07:24:46.831538  557955 logs.go:282] 0 containers: []
	W1210 07:24:46.831572  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:46.831614  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:46.831710  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:46.858454  557955 cri.go:89] found id: ""
	I1210 07:24:46.858480  557955 logs.go:282] 0 containers: []
	W1210 07:24:46.858489  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:46.858496  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:46.858559  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:46.884666  557955 cri.go:89] found id: ""
	I1210 07:24:46.884693  557955 logs.go:282] 0 containers: []
	W1210 07:24:46.884702  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:46.884708  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:46.884766  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:46.915593  557955 cri.go:89] found id: ""
	I1210 07:24:46.915619  557955 logs.go:282] 0 containers: []
	W1210 07:24:46.915628  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:46.915638  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:46.915650  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:46.984600  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:46.984639  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:47.004349  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:47.004402  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:47.093586  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:47.093606  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:47.093618  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:47.146325  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:47.146367  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
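The recurring "The connection to the server localhost:8443 was refused" from "kubectl describe nodes" is consistent with the empty container listings: no kube-apiserver container exists, so nothing is listening on the apiserver port inside the node. A quick connectivity probe, sketched in Go (the address is taken from the log; the rest is illustrative):

    // Probe the apiserver port; "connection refused" here matches the
    // kubectl error repeated throughout this log.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err) // expected while kube-apiserver is down
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }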
	I1210 07:24:49.726711  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:49.738581  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:49.738656  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:49.766159  557955 cri.go:89] found id: ""
	I1210 07:24:49.766186  557955 logs.go:282] 0 containers: []
	W1210 07:24:49.766195  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:49.766201  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:49.766264  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:49.796555  557955 cri.go:89] found id: ""
	I1210 07:24:49.796583  557955 logs.go:282] 0 containers: []
	W1210 07:24:49.796592  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:49.796599  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:49.796658  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:49.823172  557955 cri.go:89] found id: ""
	I1210 07:24:49.823198  557955 logs.go:282] 0 containers: []
	W1210 07:24:49.823207  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:49.823214  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:49.823275  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:49.857982  557955 cri.go:89] found id: ""
	I1210 07:24:49.858017  557955 logs.go:282] 0 containers: []
	W1210 07:24:49.858027  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:49.858034  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:49.858095  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:49.883076  557955 cri.go:89] found id: ""
	I1210 07:24:49.883101  557955 logs.go:282] 0 containers: []
	W1210 07:24:49.883110  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:49.883119  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:49.883203  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:49.908910  557955 cri.go:89] found id: ""
	I1210 07:24:49.908935  557955 logs.go:282] 0 containers: []
	W1210 07:24:49.908963  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:49.908970  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:49.909082  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:49.936241  557955 cri.go:89] found id: ""
	I1210 07:24:49.936268  557955 logs.go:282] 0 containers: []
	W1210 07:24:49.936278  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:49.936285  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:49.936344  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:49.963641  557955 cri.go:89] found id: ""
	I1210 07:24:49.963669  557955 logs.go:282] 0 containers: []
	W1210 07:24:49.963678  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:49.963688  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:49.963699  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:50.031225  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:50.031265  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:50.051461  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:50.051551  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:50.124788  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:50.124861  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:50.124889  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:50.172512  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:50.172594  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:52.713269  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:52.724725  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:52.724797  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:52.751570  557955 cri.go:89] found id: ""
	I1210 07:24:52.751594  557955 logs.go:282] 0 containers: []
	W1210 07:24:52.751602  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:52.751609  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:52.751667  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:52.790142  557955 cri.go:89] found id: ""
	I1210 07:24:52.790168  557955 logs.go:282] 0 containers: []
	W1210 07:24:52.790177  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:52.790183  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:52.790243  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:52.819078  557955 cri.go:89] found id: ""
	I1210 07:24:52.819104  557955 logs.go:282] 0 containers: []
	W1210 07:24:52.819113  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:52.819120  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:52.819180  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:52.855801  557955 cri.go:89] found id: ""
	I1210 07:24:52.855829  557955 logs.go:282] 0 containers: []
	W1210 07:24:52.855839  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:52.855845  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:52.855906  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:52.882047  557955 cri.go:89] found id: ""
	I1210 07:24:52.882071  557955 logs.go:282] 0 containers: []
	W1210 07:24:52.882084  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:52.882090  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:52.882153  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:52.909388  557955 cri.go:89] found id: ""
	I1210 07:24:52.909416  557955 logs.go:282] 0 containers: []
	W1210 07:24:52.909425  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:52.909432  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:52.909494  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:52.935682  557955 cri.go:89] found id: ""
	I1210 07:24:52.935710  557955 logs.go:282] 0 containers: []
	W1210 07:24:52.935720  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:52.935727  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:52.935790  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:52.962499  557955 cri.go:89] found id: ""
	I1210 07:24:52.962527  557955 logs.go:282] 0 containers: []
	W1210 07:24:52.962536  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:52.962546  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:52.962558  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:52.978965  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:52.978996  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:53.046387  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:53.046410  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:53.046424  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:53.087916  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:53.087952  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:53.118143  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:53.118174  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
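The gathering order is not fixed between cycles — in the cycle above dmesg is collected first and the kubelet journal last, whereas earlier cycles started with the kubelet journal — but the sources are the same each time: the last 400 kubelet and CRI-O journal entries ("journalctl -u <unit> -n 400"), kernel messages of severity warn and above (the dmesg invocation, human-readable with pager and color disabled, trimmed to 400 lines), "kubectl describe nodes", and an all-containers listing.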
	I1210 07:24:55.694545  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:55.706173  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:55.706243  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:55.735961  557955 cri.go:89] found id: ""
	I1210 07:24:55.735987  557955 logs.go:282] 0 containers: []
	W1210 07:24:55.735997  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:55.736004  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:55.736065  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:55.763665  557955 cri.go:89] found id: ""
	I1210 07:24:55.763693  557955 logs.go:282] 0 containers: []
	W1210 07:24:55.763702  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:55.763708  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:55.763768  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:55.790965  557955 cri.go:89] found id: ""
	I1210 07:24:55.790992  557955 logs.go:282] 0 containers: []
	W1210 07:24:55.791002  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:55.791008  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:55.791067  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:55.817674  557955 cri.go:89] found id: ""
	I1210 07:24:55.817701  557955 logs.go:282] 0 containers: []
	W1210 07:24:55.817711  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:55.817717  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:55.817774  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:55.851435  557955 cri.go:89] found id: ""
	I1210 07:24:55.851462  557955 logs.go:282] 0 containers: []
	W1210 07:24:55.851471  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:55.851477  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:55.851537  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:55.878198  557955 cri.go:89] found id: ""
	I1210 07:24:55.878275  557955 logs.go:282] 0 containers: []
	W1210 07:24:55.878292  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:55.878300  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:55.878368  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:55.908610  557955 cri.go:89] found id: ""
	I1210 07:24:55.908633  557955 logs.go:282] 0 containers: []
	W1210 07:24:55.908643  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:55.908649  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:55.908712  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:55.938119  557955 cri.go:89] found id: ""
	I1210 07:24:55.938144  557955 logs.go:282] 0 containers: []
	W1210 07:24:55.938153  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:55.938162  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:55.938173  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:55.978890  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:55.978928  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:24:56.014799  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:56.014830  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:56.083595  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:56.083632  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:56.100388  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:56.100424  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:56.189060  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:58.689359  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:24:58.700948  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:24:58.701024  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:24:58.730949  557955 cri.go:89] found id: ""
	I1210 07:24:58.730972  557955 logs.go:282] 0 containers: []
	W1210 07:24:58.730981  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:24:58.730988  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:24:58.731046  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:24:58.757858  557955 cri.go:89] found id: ""
	I1210 07:24:58.757883  557955 logs.go:282] 0 containers: []
	W1210 07:24:58.757892  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:24:58.757899  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:24:58.757959  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:24:58.788614  557955 cri.go:89] found id: ""
	I1210 07:24:58.788639  557955 logs.go:282] 0 containers: []
	W1210 07:24:58.788648  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:24:58.788656  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:24:58.788712  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:24:58.819673  557955 cri.go:89] found id: ""
	I1210 07:24:58.819759  557955 logs.go:282] 0 containers: []
	W1210 07:24:58.819783  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:24:58.819803  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:24:58.819908  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:24:58.857252  557955 cri.go:89] found id: ""
	I1210 07:24:58.857275  557955 logs.go:282] 0 containers: []
	W1210 07:24:58.857285  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:24:58.857292  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:24:58.857354  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:24:58.885846  557955 cri.go:89] found id: ""
	I1210 07:24:58.885880  557955 logs.go:282] 0 containers: []
	W1210 07:24:58.885889  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:24:58.885896  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:24:58.885957  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:24:58.916492  557955 cri.go:89] found id: ""
	I1210 07:24:58.916513  557955 logs.go:282] 0 containers: []
	W1210 07:24:58.916522  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:24:58.916528  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:24:58.916583  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:24:58.944258  557955 cri.go:89] found id: ""
	I1210 07:24:58.944280  557955 logs.go:282] 0 containers: []
	W1210 07:24:58.944289  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:24:58.944299  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:24:58.944310  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:24:59.013169  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:24:59.013212  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:24:59.030428  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:24:59.030460  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:24:59.098703  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:24:59.098722  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:24:59.098734  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:24:59.140609  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:24:59.140693  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
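Each cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*" and, judging by the timestamps, repeats roughly every three seconds while no matching process appears. A minimal sketch of such a poll loop, with hypothetical interval and timeout values (minikube's real wait logic is more involved and, as the log shows, also inspects containers and gathers diagnostics):

    // waitForProcess polls pgrep until the pattern matches or the timeout
    // elapses. Interval and timeout here are illustrative only.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForProcess(pattern string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when at least one process matches the pattern.
    		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("no process matched %q within %v", pattern, timeout)
    }

    func main() {
    	err := waitForProcess("kube-apiserver.*minikube.*", 3*time.Second, 2*time.Minute)
    	fmt.Println(err)
    }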
	I1210 07:25:01.675061  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:01.686953  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:01.687030  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:01.714476  557955 cri.go:89] found id: ""
	I1210 07:25:01.714507  557955 logs.go:282] 0 containers: []
	W1210 07:25:01.714516  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:01.714523  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:01.714581  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:01.740258  557955 cri.go:89] found id: ""
	I1210 07:25:01.740285  557955 logs.go:282] 0 containers: []
	W1210 07:25:01.740294  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:01.740301  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:01.740359  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:01.772087  557955 cri.go:89] found id: ""
	I1210 07:25:01.772119  557955 logs.go:282] 0 containers: []
	W1210 07:25:01.772129  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:01.772135  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:01.772195  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:01.799686  557955 cri.go:89] found id: ""
	I1210 07:25:01.799714  557955 logs.go:282] 0 containers: []
	W1210 07:25:01.799722  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:01.799729  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:01.799783  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:01.830928  557955 cri.go:89] found id: ""
	I1210 07:25:01.830951  557955 logs.go:282] 0 containers: []
	W1210 07:25:01.830959  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:01.830965  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:01.831032  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:01.858032  557955 cri.go:89] found id: ""
	I1210 07:25:01.858054  557955 logs.go:282] 0 containers: []
	W1210 07:25:01.858063  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:01.858070  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:01.858134  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:01.884581  557955 cri.go:89] found id: ""
	I1210 07:25:01.884602  557955 logs.go:282] 0 containers: []
	W1210 07:25:01.884611  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:01.884617  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:01.884672  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:01.912062  557955 cri.go:89] found id: ""
	I1210 07:25:01.912085  557955 logs.go:282] 0 containers: []
	W1210 07:25:01.912094  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:01.912103  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:01.912114  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:01.985088  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:01.985129  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:02.011353  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:02.011394  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:02.082172  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:02.082198  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:02.082215  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:02.124547  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:02.124578  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:04.661314  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:04.673902  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:04.673979  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:04.710271  557955 cri.go:89] found id: ""
	I1210 07:25:04.710301  557955 logs.go:282] 0 containers: []
	W1210 07:25:04.710311  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:04.710318  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:04.710384  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:04.742679  557955 cri.go:89] found id: ""
	I1210 07:25:04.742707  557955 logs.go:282] 0 containers: []
	W1210 07:25:04.742717  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:04.742724  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:04.742789  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:04.770855  557955 cri.go:89] found id: ""
	I1210 07:25:04.770888  557955 logs.go:282] 0 containers: []
	W1210 07:25:04.770898  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:04.770904  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:04.770964  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:04.797631  557955 cri.go:89] found id: ""
	I1210 07:25:04.797655  557955 logs.go:282] 0 containers: []
	W1210 07:25:04.797676  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:04.797682  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:04.797743  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:04.824902  557955 cri.go:89] found id: ""
	I1210 07:25:04.824930  557955 logs.go:282] 0 containers: []
	W1210 07:25:04.824939  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:04.824945  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:04.825018  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:04.856387  557955 cri.go:89] found id: ""
	I1210 07:25:04.856426  557955 logs.go:282] 0 containers: []
	W1210 07:25:04.856440  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:04.856447  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:04.856514  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:04.883688  557955 cri.go:89] found id: ""
	I1210 07:25:04.883714  557955 logs.go:282] 0 containers: []
	W1210 07:25:04.883723  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:04.883730  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:04.883795  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:04.910919  557955 cri.go:89] found id: ""
	I1210 07:25:04.910944  557955 logs.go:282] 0 containers: []
	W1210 07:25:04.910955  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:04.910964  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:04.910977  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:04.980844  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:04.980884  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:04.998215  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:04.998249  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:05.068623  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:05.068646  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:05.068670  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:05.115603  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:05.115650  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:07.649350  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:07.661125  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:07.661216  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:07.688199  557955 cri.go:89] found id: ""
	I1210 07:25:07.688227  557955 logs.go:282] 0 containers: []
	W1210 07:25:07.688236  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:07.688242  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:07.688306  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:07.714412  557955 cri.go:89] found id: ""
	I1210 07:25:07.714436  557955 logs.go:282] 0 containers: []
	W1210 07:25:07.714445  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:07.714451  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:07.714510  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:07.743664  557955 cri.go:89] found id: ""
	I1210 07:25:07.743688  557955 logs.go:282] 0 containers: []
	W1210 07:25:07.743698  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:07.743705  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:07.743769  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:07.771584  557955 cri.go:89] found id: ""
	I1210 07:25:07.771607  557955 logs.go:282] 0 containers: []
	W1210 07:25:07.771616  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:07.771622  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:07.771679  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:07.802281  557955 cri.go:89] found id: ""
	I1210 07:25:07.802350  557955 logs.go:282] 0 containers: []
	W1210 07:25:07.802367  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:07.802374  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:07.802445  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:07.830827  557955 cri.go:89] found id: ""
	I1210 07:25:07.830852  557955 logs.go:282] 0 containers: []
	W1210 07:25:07.830861  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:07.830868  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:07.830927  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:07.864909  557955 cri.go:89] found id: ""
	I1210 07:25:07.864939  557955 logs.go:282] 0 containers: []
	W1210 07:25:07.864948  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:07.864955  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:07.865027  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:07.892256  557955 cri.go:89] found id: ""
	I1210 07:25:07.892282  557955 logs.go:282] 0 containers: []
	W1210 07:25:07.892291  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:07.892300  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:07.892311  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:07.962248  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:07.962292  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:07.979110  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:07.979142  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:08.056313  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:08.056338  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:08.056361  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:08.098218  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:08.098261  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:10.639858  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:10.651755  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:10.651865  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:10.678656  557955 cri.go:89] found id: ""
	I1210 07:25:10.678683  557955 logs.go:282] 0 containers: []
	W1210 07:25:10.678692  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:10.678699  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:10.678764  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:10.705284  557955 cri.go:89] found id: ""
	I1210 07:25:10.705309  557955 logs.go:282] 0 containers: []
	W1210 07:25:10.705330  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:10.705337  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:10.705393  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:10.731266  557955 cri.go:89] found id: ""
	I1210 07:25:10.731337  557955 logs.go:282] 0 containers: []
	W1210 07:25:10.731353  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:10.731360  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:10.731432  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:10.758190  557955 cri.go:89] found id: ""
	I1210 07:25:10.758218  557955 logs.go:282] 0 containers: []
	W1210 07:25:10.758227  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:10.758234  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:10.758294  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:10.785524  557955 cri.go:89] found id: ""
	I1210 07:25:10.785551  557955 logs.go:282] 0 containers: []
	W1210 07:25:10.785560  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:10.785567  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:10.785625  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:10.812393  557955 cri.go:89] found id: ""
	I1210 07:25:10.812427  557955 logs.go:282] 0 containers: []
	W1210 07:25:10.812437  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:10.812444  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:10.812511  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:10.842712  557955 cri.go:89] found id: ""
	I1210 07:25:10.842782  557955 logs.go:282] 0 containers: []
	W1210 07:25:10.842808  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:10.842827  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:10.842921  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:10.871375  557955 cri.go:89] found id: ""
	I1210 07:25:10.871402  557955 logs.go:282] 0 containers: []
	W1210 07:25:10.871411  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:10.871421  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:10.871462  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:10.934760  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:10.934782  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:10.934795  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:10.976201  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:10.976239  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:11.011854  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:11.011885  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:11.085413  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:11.085456  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
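One detail worth noting in the container-status command, "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": the command substitution resolves crictl's full path when "which" can find it and falls back to the bare name otherwise, and if the crictl listing fails entirely the trailing "|| sudo docker ps -a" retries with Docker — a belt-and-suspenders way to get some container listing regardless of which runtime tooling is installed.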
	I1210 07:25:13.602984  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:13.616200  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:13.616272  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:13.647473  557955 cri.go:89] found id: ""
	I1210 07:25:13.647498  557955 logs.go:282] 0 containers: []
	W1210 07:25:13.647506  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:13.647513  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:13.647572  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:13.683550  557955 cri.go:89] found id: ""
	I1210 07:25:13.683573  557955 logs.go:282] 0 containers: []
	W1210 07:25:13.683582  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:13.683588  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:13.683647  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:13.719274  557955 cri.go:89] found id: ""
	I1210 07:25:13.719300  557955 logs.go:282] 0 containers: []
	W1210 07:25:13.719309  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:13.719315  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:13.719379  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:13.751664  557955 cri.go:89] found id: ""
	I1210 07:25:13.751689  557955 logs.go:282] 0 containers: []
	W1210 07:25:13.751698  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:13.751705  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:13.751763  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:13.791543  557955 cri.go:89] found id: ""
	I1210 07:25:13.791567  557955 logs.go:282] 0 containers: []
	W1210 07:25:13.791575  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:13.791582  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:13.791640  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:13.856590  557955 cri.go:89] found id: ""
	I1210 07:25:13.856614  557955 logs.go:282] 0 containers: []
	W1210 07:25:13.856623  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:13.856629  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:13.856695  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:13.908184  557955 cri.go:89] found id: ""
	I1210 07:25:13.908208  557955 logs.go:282] 0 containers: []
	W1210 07:25:13.908216  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:13.908222  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:13.908283  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:13.947764  557955 cri.go:89] found id: ""
	I1210 07:25:13.947845  557955 logs.go:282] 0 containers: []
	W1210 07:25:13.947869  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:13.947892  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:13.947935  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:13.994641  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:13.994722  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:14.035688  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:14.035714  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:14.115155  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:14.115197  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:14.145718  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:14.145751  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:14.276877  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:16.777206  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:16.789204  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:16.789282  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:16.816611  557955 cri.go:89] found id: ""
	I1210 07:25:16.816641  557955 logs.go:282] 0 containers: []
	W1210 07:25:16.816650  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:16.816657  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:16.816717  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:16.849499  557955 cri.go:89] found id: ""
	I1210 07:25:16.849526  557955 logs.go:282] 0 containers: []
	W1210 07:25:16.849535  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:16.849542  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:16.849607  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:16.878186  557955 cri.go:89] found id: ""
	I1210 07:25:16.878217  557955 logs.go:282] 0 containers: []
	W1210 07:25:16.878227  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:16.878233  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:16.878295  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:16.906598  557955 cri.go:89] found id: ""
	I1210 07:25:16.906623  557955 logs.go:282] 0 containers: []
	W1210 07:25:16.906633  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:16.906640  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:16.906708  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:16.933481  557955 cri.go:89] found id: ""
	I1210 07:25:16.933508  557955 logs.go:282] 0 containers: []
	W1210 07:25:16.933517  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:16.933524  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:16.933583  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:16.961361  557955 cri.go:89] found id: ""
	I1210 07:25:16.961440  557955 logs.go:282] 0 containers: []
	W1210 07:25:16.961478  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:16.961504  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:16.961601  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:16.992024  557955 cri.go:89] found id: ""
	I1210 07:25:16.992051  557955 logs.go:282] 0 containers: []
	W1210 07:25:16.992060  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:16.992067  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:16.992131  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:17.021020  557955 cri.go:89] found id: ""
	I1210 07:25:17.021045  557955 logs.go:282] 0 containers: []
	W1210 07:25:17.021054  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:17.021064  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:17.021077  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:17.088839  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:17.088880  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:17.108162  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:17.108208  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:17.197430  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:17.197460  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:17.197473  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:17.240237  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:17.240277  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
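
With nothing found, each cycle falls back to collecting diagnostics: the last 400 journal lines for kubelet and CRI-O, kernel messages at warning level and above, a `describe nodes` through the node's bundled kubectl, and a container listing that prefers `crictl` and falls back to `docker`. The same collection can be reproduced by hand; a sketch assuming a root-capable shell on the node (paths and the kubectl version are taken verbatim from the log):

```bash
# The five gather commands from the cycle above, runnable as-is on the node.
sudo journalctl -u kubelet -n 400                  # kubelet logs
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings and worse
sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
     --kubeconfig=/var/lib/minikube/kubeconfig     # fails here: the apiserver is down
sudo journalctl -u crio -n 400                     # CRI-O logs
sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a  # container status, docker fallback
```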
	I1210 07:25:19.774892  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:19.786981  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:19.787064  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:19.813280  557955 cri.go:89] found id: ""
	I1210 07:25:19.813359  557955 logs.go:282] 0 containers: []
	W1210 07:25:19.813396  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:19.813423  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:19.813514  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:19.848628  557955 cri.go:89] found id: ""
	I1210 07:25:19.848655  557955 logs.go:282] 0 containers: []
	W1210 07:25:19.848664  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:19.848671  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:19.848739  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:19.875928  557955 cri.go:89] found id: ""
	I1210 07:25:19.875953  557955 logs.go:282] 0 containers: []
	W1210 07:25:19.875963  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:19.875969  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:19.876032  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:19.903964  557955 cri.go:89] found id: ""
	I1210 07:25:19.903991  557955 logs.go:282] 0 containers: []
	W1210 07:25:19.904001  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:19.904007  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:19.904095  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:19.933337  557955 cri.go:89] found id: ""
	I1210 07:25:19.933366  557955 logs.go:282] 0 containers: []
	W1210 07:25:19.933375  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:19.933382  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:19.933448  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:19.961214  557955 cri.go:89] found id: ""
	I1210 07:25:19.961243  557955 logs.go:282] 0 containers: []
	W1210 07:25:19.961252  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:19.961262  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:19.961329  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:19.990077  557955 cri.go:89] found id: ""
	I1210 07:25:19.990150  557955 logs.go:282] 0 containers: []
	W1210 07:25:19.990167  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:19.990174  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:19.990237  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:20.028820  557955 cri.go:89] found id: ""
	I1210 07:25:20.028871  557955 logs.go:282] 0 containers: []
	W1210 07:25:20.028881  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:20.028893  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:20.028910  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:20.103657  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:20.103700  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:20.121387  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:20.121422  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:20.207314  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:20.207335  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:20.207347  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:20.249162  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:20.249210  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:22.779861  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:22.791356  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:22.791433  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:22.818367  557955 cri.go:89] found id: ""
	I1210 07:25:22.818393  557955 logs.go:282] 0 containers: []
	W1210 07:25:22.818403  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:22.818409  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:22.818470  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:22.851372  557955 cri.go:89] found id: ""
	I1210 07:25:22.851402  557955 logs.go:282] 0 containers: []
	W1210 07:25:22.851410  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:22.851416  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:22.851473  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:22.878311  557955 cri.go:89] found id: ""
	I1210 07:25:22.878335  557955 logs.go:282] 0 containers: []
	W1210 07:25:22.878343  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:22.878350  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:22.878408  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:22.905287  557955 cri.go:89] found id: ""
	I1210 07:25:22.905310  557955 logs.go:282] 0 containers: []
	W1210 07:25:22.905319  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:22.905325  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:22.905405  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:22.935815  557955 cri.go:89] found id: ""
	I1210 07:25:22.935838  557955 logs.go:282] 0 containers: []
	W1210 07:25:22.935846  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:22.935853  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:22.935912  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:22.966002  557955 cri.go:89] found id: ""
	I1210 07:25:22.966027  557955 logs.go:282] 0 containers: []
	W1210 07:25:22.966036  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:22.966042  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:22.966102  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:22.992523  557955 cri.go:89] found id: ""
	I1210 07:25:22.992546  557955 logs.go:282] 0 containers: []
	W1210 07:25:22.992555  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:22.992562  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:22.992622  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:23.021971  557955 cri.go:89] found id: ""
	I1210 07:25:23.022038  557955 logs.go:282] 0 containers: []
	W1210 07:25:23.022062  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:23.022087  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:23.022126  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:23.039264  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:23.039346  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:23.114519  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:23.114593  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:23.114621  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:23.163560  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:23.163598  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:23.195658  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:23.195691  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:25.764399  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:25.776071  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:25.776142  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:25.803892  557955 cri.go:89] found id: ""
	I1210 07:25:25.803922  557955 logs.go:282] 0 containers: []
	W1210 07:25:25.803932  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:25.803938  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:25.803997  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:25.829706  557955 cri.go:89] found id: ""
	I1210 07:25:25.829736  557955 logs.go:282] 0 containers: []
	W1210 07:25:25.829747  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:25.829755  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:25.829816  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:25.860062  557955 cri.go:89] found id: ""
	I1210 07:25:25.860090  557955 logs.go:282] 0 containers: []
	W1210 07:25:25.860100  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:25.860106  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:25.860165  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:25.888286  557955 cri.go:89] found id: ""
	I1210 07:25:25.888313  557955 logs.go:282] 0 containers: []
	W1210 07:25:25.888322  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:25.888328  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:25.888386  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:25.915217  557955 cri.go:89] found id: ""
	I1210 07:25:25.915252  557955 logs.go:282] 0 containers: []
	W1210 07:25:25.915261  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:25.915269  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:25.915331  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:25.943138  557955 cri.go:89] found id: ""
	I1210 07:25:25.943162  557955 logs.go:282] 0 containers: []
	W1210 07:25:25.943170  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:25.943177  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:25.943240  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:25.970164  557955 cri.go:89] found id: ""
	I1210 07:25:25.970190  557955 logs.go:282] 0 containers: []
	W1210 07:25:25.970199  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:25.970206  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:25.970267  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:25.996685  557955 cri.go:89] found id: ""
	I1210 07:25:25.996718  557955 logs.go:282] 0 containers: []
	W1210 07:25:25.996728  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:25.996737  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:25.996755  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:26.066732  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:26.066770  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:26.084712  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:26.084747  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:26.168234  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:26.168315  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:26.168343  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:26.215736  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:26.215774  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:28.747094  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:28.758855  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:28.758926  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:28.788127  557955 cri.go:89] found id: ""
	I1210 07:25:28.788152  557955 logs.go:282] 0 containers: []
	W1210 07:25:28.788162  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:28.788168  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:28.788228  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:28.813580  557955 cri.go:89] found id: ""
	I1210 07:25:28.813606  557955 logs.go:282] 0 containers: []
	W1210 07:25:28.813616  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:28.813622  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:28.813679  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:28.848933  557955 cri.go:89] found id: ""
	I1210 07:25:28.848958  557955 logs.go:282] 0 containers: []
	W1210 07:25:28.848967  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:28.848974  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:28.849055  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:28.875525  557955 cri.go:89] found id: ""
	I1210 07:25:28.875554  557955 logs.go:282] 0 containers: []
	W1210 07:25:28.875564  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:28.875570  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:28.875630  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:28.906115  557955 cri.go:89] found id: ""
	I1210 07:25:28.906143  557955 logs.go:282] 0 containers: []
	W1210 07:25:28.906154  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:28.906160  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:28.906261  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:28.934553  557955 cri.go:89] found id: ""
	I1210 07:25:28.934581  557955 logs.go:282] 0 containers: []
	W1210 07:25:28.934591  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:28.934598  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:28.934667  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:28.961635  557955 cri.go:89] found id: ""
	I1210 07:25:28.961660  557955 logs.go:282] 0 containers: []
	W1210 07:25:28.961669  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:28.961676  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:28.961747  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:28.988112  557955 cri.go:89] found id: ""
	I1210 07:25:28.988142  557955 logs.go:282] 0 containers: []
	W1210 07:25:28.988152  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:28.988162  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:28.988174  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:29.057958  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:29.058000  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:29.074718  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:29.074749  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:29.154749  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:29.154768  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:29.154780  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:29.204036  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:29.204078  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:31.734662  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:31.746925  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:31.746998  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:31.773704  557955 cri.go:89] found id: ""
	I1210 07:25:31.773729  557955 logs.go:282] 0 containers: []
	W1210 07:25:31.773737  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:31.773744  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:31.773801  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:31.800897  557955 cri.go:89] found id: ""
	I1210 07:25:31.800925  557955 logs.go:282] 0 containers: []
	W1210 07:25:31.800934  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:31.800941  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:31.801025  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:31.827934  557955 cri.go:89] found id: ""
	I1210 07:25:31.827964  557955 logs.go:282] 0 containers: []
	W1210 07:25:31.827973  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:31.827979  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:31.828038  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:31.864390  557955 cri.go:89] found id: ""
	I1210 07:25:31.864418  557955 logs.go:282] 0 containers: []
	W1210 07:25:31.864427  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:31.864433  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:31.864506  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:31.892638  557955 cri.go:89] found id: ""
	I1210 07:25:31.892665  557955 logs.go:282] 0 containers: []
	W1210 07:25:31.892675  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:31.892681  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:31.892741  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:31.919334  557955 cri.go:89] found id: ""
	I1210 07:25:31.919362  557955 logs.go:282] 0 containers: []
	W1210 07:25:31.919370  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:31.919377  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:31.919437  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:31.946178  557955 cri.go:89] found id: ""
	I1210 07:25:31.946204  557955 logs.go:282] 0 containers: []
	W1210 07:25:31.946213  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:31.946222  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:31.946296  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:31.972901  557955 cri.go:89] found id: ""
	I1210 07:25:31.972982  557955 logs.go:282] 0 containers: []
	W1210 07:25:31.973014  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:31.973051  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:31.973084  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:32.041107  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:32.041127  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:32.041140  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:32.082135  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:32.082171  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:32.122042  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:32.122072  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:32.206007  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:32.206044  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:34.723408  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:34.735546  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:34.735616  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:34.762742  557955 cri.go:89] found id: ""
	I1210 07:25:34.762770  557955 logs.go:282] 0 containers: []
	W1210 07:25:34.762779  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:34.762786  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:34.762852  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:34.789645  557955 cri.go:89] found id: ""
	I1210 07:25:34.789672  557955 logs.go:282] 0 containers: []
	W1210 07:25:34.789681  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:34.789687  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:34.789744  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:34.815873  557955 cri.go:89] found id: ""
	I1210 07:25:34.815903  557955 logs.go:282] 0 containers: []
	W1210 07:25:34.815912  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:34.815919  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:34.815978  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:34.848435  557955 cri.go:89] found id: ""
	I1210 07:25:34.848462  557955 logs.go:282] 0 containers: []
	W1210 07:25:34.848472  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:34.848478  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:34.848536  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:34.875978  557955 cri.go:89] found id: ""
	I1210 07:25:34.876004  557955 logs.go:282] 0 containers: []
	W1210 07:25:34.876014  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:34.876044  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:34.876127  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:34.902487  557955 cri.go:89] found id: ""
	I1210 07:25:34.902512  557955 logs.go:282] 0 containers: []
	W1210 07:25:34.902521  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:34.902528  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:34.902618  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:34.930409  557955 cri.go:89] found id: ""
	I1210 07:25:34.930435  557955 logs.go:282] 0 containers: []
	W1210 07:25:34.930444  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:34.930451  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:34.930510  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:34.958563  557955 cri.go:89] found id: ""
	I1210 07:25:34.958590  557955 logs.go:282] 0 containers: []
	W1210 07:25:34.958599  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:34.958608  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:34.958620  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:35.025987  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:35.026036  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:35.047016  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:35.047048  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:35.126061  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:35.126085  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:35.126099  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:35.174614  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:35.174676  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:37.718376  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:37.735370  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:37.735447  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:37.781481  557955 cri.go:89] found id: ""
	I1210 07:25:37.781511  557955 logs.go:282] 0 containers: []
	W1210 07:25:37.781521  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:37.781528  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:37.781593  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:37.815624  557955 cri.go:89] found id: ""
	I1210 07:25:37.815652  557955 logs.go:282] 0 containers: []
	W1210 07:25:37.815662  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:37.815668  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:37.815724  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:37.858216  557955 cri.go:89] found id: ""
	I1210 07:25:37.858244  557955 logs.go:282] 0 containers: []
	W1210 07:25:37.858252  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:37.858258  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:37.858320  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:37.887390  557955 cri.go:89] found id: ""
	I1210 07:25:37.887419  557955 logs.go:282] 0 containers: []
	W1210 07:25:37.887428  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:37.887437  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:37.887493  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:37.917977  557955 cri.go:89] found id: ""
	I1210 07:25:37.918010  557955 logs.go:282] 0 containers: []
	W1210 07:25:37.918020  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:37.918026  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:37.918083  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:37.958432  557955 cri.go:89] found id: ""
	I1210 07:25:37.958462  557955 logs.go:282] 0 containers: []
	W1210 07:25:37.958471  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:37.958478  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:37.958539  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:37.994067  557955 cri.go:89] found id: ""
	I1210 07:25:37.994096  557955 logs.go:282] 0 containers: []
	W1210 07:25:37.994105  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:37.994111  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:37.994171  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:38.028462  557955 cri.go:89] found id: ""
	I1210 07:25:38.028493  557955 logs.go:282] 0 containers: []
	W1210 07:25:38.028504  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:38.028514  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:38.028526  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:38.099195  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:38.099239  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:38.116705  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:38.116734  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:38.202022  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:38.202092  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:38.202120  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:38.242972  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:38.243009  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:40.778383  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:40.795287  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:40.795376  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:40.830675  557955 cri.go:89] found id: ""
	I1210 07:25:40.830710  557955 logs.go:282] 0 containers: []
	W1210 07:25:40.830720  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:40.830726  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:40.830794  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:40.879918  557955 cri.go:89] found id: ""
	I1210 07:25:40.879945  557955 logs.go:282] 0 containers: []
	W1210 07:25:40.879954  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:40.879961  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:40.880022  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:40.943049  557955 cri.go:89] found id: ""
	I1210 07:25:40.943073  557955 logs.go:282] 0 containers: []
	W1210 07:25:40.943082  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:40.943088  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:40.943152  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:40.980265  557955 cri.go:89] found id: ""
	I1210 07:25:40.980291  557955 logs.go:282] 0 containers: []
	W1210 07:25:40.980301  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:40.980307  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:40.980389  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:41.026075  557955 cri.go:89] found id: ""
	I1210 07:25:41.026099  557955 logs.go:282] 0 containers: []
	W1210 07:25:41.026115  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:41.026131  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:41.026208  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:41.075925  557955 cri.go:89] found id: ""
	I1210 07:25:41.075957  557955 logs.go:282] 0 containers: []
	W1210 07:25:41.075966  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:41.075972  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:41.076031  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:41.114496  557955 cri.go:89] found id: ""
	I1210 07:25:41.114528  557955 logs.go:282] 0 containers: []
	W1210 07:25:41.114537  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:41.114544  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:41.114608  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:41.207760  557955 cri.go:89] found id: ""
	I1210 07:25:41.207795  557955 logs.go:282] 0 containers: []
	W1210 07:25:41.207804  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:41.207824  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:41.207841  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:41.305332  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:41.305365  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:41.327010  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:41.327094  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:41.415196  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:41.415213  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:41.415225  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:41.469736  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:41.469792  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:44.014404  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:44.026946  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:44.027040  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:44.062745  557955 cri.go:89] found id: ""
	I1210 07:25:44.062767  557955 logs.go:282] 0 containers: []
	W1210 07:25:44.062776  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:44.062782  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:44.062853  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:44.098815  557955 cri.go:89] found id: ""
	I1210 07:25:44.098850  557955 logs.go:282] 0 containers: []
	W1210 07:25:44.098863  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:44.098874  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:44.098950  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:44.140786  557955 cri.go:89] found id: ""
	I1210 07:25:44.140822  557955 logs.go:282] 0 containers: []
	W1210 07:25:44.140844  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:44.140850  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:44.140925  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:44.227765  557955 cri.go:89] found id: ""
	I1210 07:25:44.227793  557955 logs.go:282] 0 containers: []
	W1210 07:25:44.227802  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:44.227808  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:44.227880  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:44.265373  557955 cri.go:89] found id: ""
	I1210 07:25:44.265410  557955 logs.go:282] 0 containers: []
	W1210 07:25:44.265419  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:44.265437  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:44.265506  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:44.308624  557955 cri.go:89] found id: ""
	I1210 07:25:44.308650  557955 logs.go:282] 0 containers: []
	W1210 07:25:44.308659  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:44.308666  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:44.308723  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:44.343257  557955 cri.go:89] found id: ""
	I1210 07:25:44.343280  557955 logs.go:282] 0 containers: []
	W1210 07:25:44.343289  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:44.343295  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:44.343361  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:44.372388  557955 cri.go:89] found id: ""
	I1210 07:25:44.372410  557955 logs.go:282] 0 containers: []
	W1210 07:25:44.372418  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:44.372427  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:44.372438  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:44.434183  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:44.434273  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:44.479491  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:44.479519  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:44.559601  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:44.559642  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:44.588848  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:44.588877  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:44.686957  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:47.188796  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:47.200446  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:47.200516  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:47.232222  557955 cri.go:89] found id: ""
	I1210 07:25:47.232247  557955 logs.go:282] 0 containers: []
	W1210 07:25:47.232258  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:47.232265  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:47.232324  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:47.257653  557955 cri.go:89] found id: ""
	I1210 07:25:47.257681  557955 logs.go:282] 0 containers: []
	W1210 07:25:47.257690  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:47.257697  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:47.257756  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:47.286699  557955 cri.go:89] found id: ""
	I1210 07:25:47.286726  557955 logs.go:282] 0 containers: []
	W1210 07:25:47.286735  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:47.286741  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:47.286799  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:47.313310  557955 cri.go:89] found id: ""
	I1210 07:25:47.313343  557955 logs.go:282] 0 containers: []
	W1210 07:25:47.313353  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:47.313359  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:47.313420  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:47.345430  557955 cri.go:89] found id: ""
	I1210 07:25:47.345455  557955 logs.go:282] 0 containers: []
	W1210 07:25:47.345464  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:47.345471  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:47.345531  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:47.376475  557955 cri.go:89] found id: ""
	I1210 07:25:47.376499  557955 logs.go:282] 0 containers: []
	W1210 07:25:47.376507  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:47.376514  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:47.376573  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:47.408760  557955 cri.go:89] found id: ""
	I1210 07:25:47.408785  557955 logs.go:282] 0 containers: []
	W1210 07:25:47.408794  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:47.408801  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:47.408860  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:47.438433  557955 cri.go:89] found id: ""
	I1210 07:25:47.438456  557955 logs.go:282] 0 containers: []
	W1210 07:25:47.438465  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:47.438474  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:47.438486  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:47.507971  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:47.507992  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:47.508005  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:47.549180  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:47.549229  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:47.579596  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:47.579625  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:47.650588  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:47.650666  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:50.169678  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:50.183001  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:50.183091  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:50.212467  557955 cri.go:89] found id: ""
	I1210 07:25:50.212491  557955 logs.go:282] 0 containers: []
	W1210 07:25:50.212499  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:50.212518  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:50.212593  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:50.238535  557955 cri.go:89] found id: ""
	I1210 07:25:50.238563  557955 logs.go:282] 0 containers: []
	W1210 07:25:50.238572  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:50.238578  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:50.238648  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:50.272966  557955 cri.go:89] found id: ""
	I1210 07:25:50.272991  557955 logs.go:282] 0 containers: []
	W1210 07:25:50.273001  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:50.273007  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:50.273078  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:50.300211  557955 cri.go:89] found id: ""
	I1210 07:25:50.300233  557955 logs.go:282] 0 containers: []
	W1210 07:25:50.300242  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:50.300249  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:50.300306  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:50.327321  557955 cri.go:89] found id: ""
	I1210 07:25:50.327347  557955 logs.go:282] 0 containers: []
	W1210 07:25:50.327356  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:50.327362  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:50.327419  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:50.358905  557955 cri.go:89] found id: ""
	I1210 07:25:50.358928  557955 logs.go:282] 0 containers: []
	W1210 07:25:50.358937  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:50.358943  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:50.359007  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:50.385377  557955 cri.go:89] found id: ""
	I1210 07:25:50.385405  557955 logs.go:282] 0 containers: []
	W1210 07:25:50.385415  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:50.385422  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:50.385484  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:50.411164  557955 cri.go:89] found id: ""
	I1210 07:25:50.411191  557955 logs.go:282] 0 containers: []
	W1210 07:25:50.411201  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:50.411211  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:50.411223  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:50.452795  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:50.452832  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:50.481447  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:50.481478  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:50.554149  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:50.554188  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:50.572075  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:50.572105  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:50.644534  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
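
The scan above repeats for every control-plane component: minikube asks crictl for any container whose name matches the filter, and an empty ID list produces the `No container was found matching ...` warnings. A minimal Go sketch of that check (illustrative only, not minikube's actual cri.go code; it assumes crictl is on PATH and a CRI socket is configured):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers mirrors the crictl invocation in the log: list all
    // container IDs whose name matches the filter; empty output means none.
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainers(name)
    		if err != nil {
    			fmt.Println("crictl failed:", err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    		} else {
    			fmt.Printf("%s: %v\n", name, ids)
    		}
    	}
    }

Since the apiserver never came up, every filter returns an empty list, which is why the log then falls back to gathering journalctl, dmesg, and container-status output.
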
	I1210 07:25:53.145266  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:53.158440  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:25:53.158518  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:25:53.191633  557955 cri.go:89] found id: ""
	I1210 07:25:53.191665  557955 logs.go:282] 0 containers: []
	W1210 07:25:53.191674  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:25:53.191681  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:25:53.191744  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:25:53.227374  557955 cri.go:89] found id: ""
	I1210 07:25:53.227400  557955 logs.go:282] 0 containers: []
	W1210 07:25:53.227410  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:25:53.227417  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:25:53.227478  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:25:53.255593  557955 cri.go:89] found id: ""
	I1210 07:25:53.255620  557955 logs.go:282] 0 containers: []
	W1210 07:25:53.255629  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:25:53.255635  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:25:53.255693  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:25:53.284764  557955 cri.go:89] found id: ""
	I1210 07:25:53.284793  557955 logs.go:282] 0 containers: []
	W1210 07:25:53.284803  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:25:53.284810  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:25:53.284925  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:25:53.310996  557955 cri.go:89] found id: ""
	I1210 07:25:53.311023  557955 logs.go:282] 0 containers: []
	W1210 07:25:53.311032  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:25:53.311038  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:25:53.311097  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:25:53.336820  557955 cri.go:89] found id: ""
	I1210 07:25:53.336848  557955 logs.go:282] 0 containers: []
	W1210 07:25:53.336857  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:25:53.336864  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:25:53.336923  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:25:53.364099  557955 cri.go:89] found id: ""
	I1210 07:25:53.364128  557955 logs.go:282] 0 containers: []
	W1210 07:25:53.364136  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:25:53.364143  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:25:53.364207  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:25:53.392724  557955 cri.go:89] found id: ""
	I1210 07:25:53.392753  557955 logs.go:282] 0 containers: []
	W1210 07:25:53.392762  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:25:53.392770  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:25:53.392782  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:25:53.409301  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:25:53.409331  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:25:53.481464  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:25:53.481483  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:25:53.481495  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:25:53.521799  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:25:53.521838  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:25:53.552950  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:25:53.552980  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:25:56.121308  557955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:25:56.135631  557955 kubeadm.go:602] duration metric: took 4m3.265013126s to restartPrimaryControlPlane
	W1210 07:25:56.135700  557955 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 07:25:56.135766  557955 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 07:25:56.566194  557955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:25:56.579388  557955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:25:56.588731  557955 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:25:56.588798  557955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:25:56.597665  557955 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:25:56.597689  557955 kubeadm.go:158] found existing configuration files:
	
	I1210 07:25:56.597743  557955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:25:56.607002  557955 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:25:56.607065  557955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:25:56.614872  557955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:25:56.623106  557955 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:25:56.623169  557955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:25:56.630946  557955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:25:56.639073  557955 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:25:56.639164  557955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:25:56.647087  557955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:25:56.655430  557955 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:25:56.655502  557955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
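
Each `grep`/`rm -f` pair above is the same stale-config sweep: a kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint; here every file is already missing, so each grep exits with status 2 and the rm is a no-op. A rough Go reconstruction of that sweep (the endpoint and file names come from the log; the loop itself is a hypothetical stand-in for minikube's kubeadm.go logic):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing elsewhere: remove so kubeadm regenerates it.
    			fmt.Println("removing stale config:", conf)
    			_ = os.Remove(conf)
    		}
    	}
    }
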
	I1210 07:25:56.663415  557955 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:25:56.706950  557955 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:25:56.707218  557955 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:25:56.783761  557955 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:25:56.783843  557955 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:25:56.783877  557955 kubeadm.go:319] OS: Linux
	I1210 07:25:56.783920  557955 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:25:56.783965  557955 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:25:56.784010  557955 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:25:56.784055  557955 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:25:56.784100  557955 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:25:56.784145  557955 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:25:56.784188  557955 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:25:56.784235  557955 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:25:56.784278  557955 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:25:56.863641  557955 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:25:56.863800  557955 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:25:56.863918  557955 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:25:56.881747  557955 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:25:56.889354  557955 out.go:252]   - Generating certificates and keys ...
	I1210 07:25:56.889523  557955 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:25:56.889642  557955 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:25:56.889766  557955 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:25:56.889864  557955 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:25:56.889939  557955 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:25:56.889994  557955 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:25:56.890057  557955 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:25:56.890119  557955 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:25:56.890193  557955 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:25:56.890268  557955 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:25:56.890306  557955 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:25:56.890362  557955 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:25:57.325738  557955 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:25:57.546197  557955 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:25:58.338502  557955 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:25:59.311093  557955 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:25:59.525907  557955 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:25:59.526502  557955 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:25:59.529126  557955 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:25:59.532930  557955 out.go:252]   - Booting up control plane ...
	I1210 07:25:59.533073  557955 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:25:59.533174  557955 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:25:59.541091  557955 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:25:59.558755  557955 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:25:59.558873  557955 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:25:59.566213  557955 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:25:59.566498  557955 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:25:59.566705  557955 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:25:59.702785  557955 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:25:59.702911  557955 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:29:59.703742  557955 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00118892s
	I1210 07:29:59.703775  557955 kubeadm.go:319] 
	I1210 07:29:59.703833  557955 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:29:59.703867  557955 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:29:59.703971  557955 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:29:59.703977  557955 kubeadm.go:319] 
	I1210 07:29:59.704081  557955 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:29:59.704113  557955 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:29:59.704143  557955 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:29:59.704147  557955 kubeadm.go:319] 
	I1210 07:29:59.707568  557955 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:29:59.707976  557955 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:29:59.708083  557955 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:29:59.708308  557955 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:29:59.708317  557955 kubeadm.go:319] 
	I1210 07:29:59.708382  557955 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:29:59.708490  557955 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00118892s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
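
The failure mode is identical on every attempt: kubeadm writes the static-pod manifests, starts the kubelet, then polls the kubelet's healthz endpoint for up to four minutes before giving up with `context deadline exceeded`. A self-contained Go sketch of that wait loop (the timeout mirrors the log; the 2-second poll interval is an arbitrary choice here, not kubeadm's exact implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitKubeletHealthy polls the kubelet healthz endpoint until it
    // answers 200 OK or the context deadline passes.
    func waitKubeletHealthy(ctx context.Context) error {
    	tick := time.NewTicker(2 * time.Second)
    	defer tick.Stop()
    	for {
    		resp, err := http.Get("http://127.0.0.1:10248/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
    		case <-tick.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	if err := waitKubeletHealthy(ctx); err != nil {
    		fmt.Println(err) // "context deadline exceeded", as in the log
    	}
    }
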
	
	I1210 07:29:59.708577  557955 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 07:30:00.298227  557955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:30:00.372296  557955 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:30:00.372378  557955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:30:00.415394  557955 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:30:00.415442  557955 kubeadm.go:158] found existing configuration files:
	
	I1210 07:30:00.415512  557955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:30:00.459916  557955 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:30:00.460045  557955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:30:00.474293  557955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:30:00.490975  557955 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:30:00.491049  557955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:30:00.512451  557955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:30:00.536852  557955 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:30:00.536925  557955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:30:00.558231  557955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:30:00.582756  557955 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:30:00.582836  557955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:30:00.604582  557955 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:30:00.685088  557955 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:30:00.685586  557955 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:30:00.809751  557955 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:30:00.809840  557955 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:30:00.809876  557955 kubeadm.go:319] OS: Linux
	I1210 07:30:00.809927  557955 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:30:00.809983  557955 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:30:00.810030  557955 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:30:00.810078  557955 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:30:00.810127  557955 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:30:00.810189  557955 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:30:00.810243  557955 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:30:00.810297  557955 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:30:00.810343  557955 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:30:00.898186  557955 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:30:00.898309  557955 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:30:00.898404  557955 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:30:00.918384  557955 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:30:00.922413  557955 out.go:252]   - Generating certificates and keys ...
	I1210 07:30:00.922538  557955 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:30:00.922723  557955 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:30:00.923528  557955 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:30:00.924095  557955 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:30:00.925032  557955 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:30:00.926757  557955 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:30:00.927574  557955 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:30:00.928612  557955 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:30:00.929055  557955 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:30:00.934344  557955 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:30:00.935100  557955 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:30:00.935177  557955 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:30:01.136559  557955 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:30:01.369462  557955 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:30:01.501790  557955 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:30:01.899439  557955 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:30:01.966163  557955 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:30:01.966803  557955 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:30:01.969529  557955 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:30:01.973485  557955 out.go:252]   - Booting up control plane ...
	I1210 07:30:01.973591  557955 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:30:01.973675  557955 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:30:01.973753  557955 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:30:01.990083  557955 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:30:01.990193  557955 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:30:01.998755  557955 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:30:01.999127  557955 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:30:01.999175  557955 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:30:02.132024  557955 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:30:02.132159  557955 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:34:02.132628  557955 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000881028s
	I1210 07:34:02.132663  557955 kubeadm.go:319] 
	I1210 07:34:02.132718  557955 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:34:02.132749  557955 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:34:02.132849  557955 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:34:02.132854  557955 kubeadm.go:319] 
	I1210 07:34:02.132953  557955 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:34:02.132983  557955 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:34:02.133012  557955 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:34:02.133017  557955 kubeadm.go:319] 
	I1210 07:34:02.136967  557955 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:34:02.137455  557955 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:34:02.137576  557955 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:34:02.137812  557955 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:34:02.137817  557955 kubeadm.go:319] 
	I1210 07:34:02.137886  557955 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
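
Note the recurring `[WARNING SystemVerification]` about cgroups: this runner is on kernel 5.15 with a cgroup v1 hierarchy, which kubelet v1.35+ refuses to run on unless the kubelet configuration option 'FailCgroupV1' is explicitly set to 'false', and that is a plausible reason the kubelet never became healthy here. A quick illustrative check for which cgroup version a host is running (a heuristic sketch, not kubeadm's verification code):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// On a unified (v2) hierarchy, /sys/fs/cgroup/cgroup.controllers exists.
    	// The preflight warning above fires on v1 hosts like this runner.
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		fmt.Println("cgroup v2 (unified hierarchy)")
    	} else {
    		fmt.Println("cgroup v1: kubelet v1.35+ needs FailCgroupV1 set to false")
    	}
    }
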
	I1210 07:34:02.137943  557955 kubeadm.go:403] duration metric: took 12m9.315054516s to StartCluster
	I1210 07:34:02.137979  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:34:02.138039  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:34:02.174815  557955 cri.go:89] found id: ""
	I1210 07:34:02.174839  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.174847  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:02.174854  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:34:02.174915  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:34:02.208728  557955 cri.go:89] found id: ""
	I1210 07:34:02.208752  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.208760  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:34:02.208767  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:34:02.208832  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:34:02.236860  557955 cri.go:89] found id: ""
	I1210 07:34:02.236884  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.236893  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:34:02.236899  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:34:02.236958  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:34:02.265401  557955 cri.go:89] found id: ""
	I1210 07:34:02.265424  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.265433  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:02.265444  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:34:02.265506  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:34:02.291955  557955 cri.go:89] found id: ""
	I1210 07:34:02.292035  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.292057  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:02.292078  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:34:02.292168  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:34:02.317793  557955 cri.go:89] found id: ""
	I1210 07:34:02.317829  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.317838  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:02.317858  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:34:02.317943  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:34:02.344043  557955 cri.go:89] found id: ""
	I1210 07:34:02.344125  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.344162  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:02.344192  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:34:02.344286  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:34:02.370559  557955 cri.go:89] found id: ""
	I1210 07:34:02.370585  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.370595  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:34:02.370605  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:02.370641  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:02.387248  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:02.387277  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:02.465343  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:02.465410  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:34:02.465438  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:34:02.512541  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:34:02.513707  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:02.548584  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:02.548608  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
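
With no containers to inspect, the diagnostics pass falls back to the four gather commands above: the kubelet and CRI-O journals, filtered dmesg, and a crictl/docker process listing. Run locally, the equivalent is roughly this sketch (command strings are verbatim from the log; the loop is a stand-in for minikube's ssh_runner, which executes them over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	gathers := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, g := range gathers {
    		fmt.Println("Gathering logs for", g.name, "...")
    		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Println("  error:", err)
    		}
    		fmt.Print(string(out))
    	}
    }
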
	W1210 07:34:02.636863  557955 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000881028s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:34:02.636922  557955 out.go:285] * 
	
	W1210 07:34:02.638516  557955 out.go:285] * 
	W1210 07:34:02.642858  557955 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:34:02.649671  557955 out.go:203] 
	W1210 07:34:02.653653  557955 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000881028s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W1210 07:34:02.653803  557955 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:34:02.653865  557955 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:34:02.657006  557955 out.go:203] 

** /stderr **
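The repeated kubeadm failure above is the kubelet/cgroup-driver mismatch that minikube's own suggestion and the linked issue #4172 describe. A minimal sketch of the suggested retry, reusing this test's invocation (whether the extra kubelet config actually clears the v1.35.0-rc.1 health check on this cgroups-v1 host is unverified):

	out/minikube-linux-arm64 start -p kubernetes-upgrade-943140 --memory=3072 \
	  --kubernetes-version=v1.35.0-rc.1 --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd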
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-943140 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-943140 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-943140 version --output=json: exit status 1 (106.437217ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
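The refused connection to 192.168.76.2:8443 is consistent with the failure above: the kubelet never became healthy, so the apiserver static pod was never started. A quick cross-check from the host, assuming the port mapping shown in the docker inspect output below (8443/tcp published on 127.0.0.1:33392), is to probe the apiserver endpoint directly; connection refused there confirms nothing is listening:

	curl -k https://127.0.0.1:33392/healthz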
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-10 07:34:03.430931008 +0000 UTC m=+5090.230779478
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-943140
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-943140:

-- stdout --
	[
	    {
	        "Id": "41766d0e52f2a8565d8c1a70105ab0a6f7c9fe5bc3bbbfa5b1f78a4fc1966780",
	        "Created": "2025-12-10T07:20:49.025225249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 558094,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:21:25.633343639Z",
	            "FinishedAt": "2025-12-10T07:21:24.345275211Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/41766d0e52f2a8565d8c1a70105ab0a6f7c9fe5bc3bbbfa5b1f78a4fc1966780/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41766d0e52f2a8565d8c1a70105ab0a6f7c9fe5bc3bbbfa5b1f78a4fc1966780/hostname",
	        "HostsPath": "/var/lib/docker/containers/41766d0e52f2a8565d8c1a70105ab0a6f7c9fe5bc3bbbfa5b1f78a4fc1966780/hosts",
	        "LogPath": "/var/lib/docker/containers/41766d0e52f2a8565d8c1a70105ab0a6f7c9fe5bc3bbbfa5b1f78a4fc1966780/41766d0e52f2a8565d8c1a70105ab0a6f7c9fe5bc3bbbfa5b1f78a4fc1966780-json.log",
	        "Name": "/kubernetes-upgrade-943140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-943140:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-943140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41766d0e52f2a8565d8c1a70105ab0a6f7c9fe5bc3bbbfa5b1f78a4fc1966780",
	                "LowerDir": "/var/lib/docker/overlay2/f1404874748c2d7a6c7a5f70f51b0c7f43a5fbf0faf15170ffc1a6c285ea1318-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f1404874748c2d7a6c7a5f70f51b0c7f43a5fbf0faf15170ffc1a6c285ea1318/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f1404874748c2d7a6c7a5f70f51b0c7f43a5fbf0faf15170ffc1a6c285ea1318/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f1404874748c2d7a6c7a5f70f51b0c7f43a5fbf0faf15170ffc1a6c285ea1318/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-943140",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-943140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-943140",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-943140",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-943140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "beb1f1e5ddcfc79a89d791f30d6f0b1ae6b04a66bec932b216d93bf16dfc87fb",
	            "SandboxKey": "/var/run/docker/netns/beb1f1e5ddcf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-943140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:05:8f:ae:6d:da",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f3fae6d2bc876caf2a82138da59db18703d294ccfb0ac390cd05f05e6989199",
	                    "EndpointID": "4cfffa7a35f9747999088d3fd561efd7345f0054068b421311f2f717558e0441",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-943140",
	                        "41766d0e52f2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
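The inspect output confirms the node container itself is still running, with 8443/tcp published on 127.0.0.1:33392. For scripting against such output, the same mapping can be extracted with a plain docker Go template (standard docker CLI, nothing minikube-specific):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-943140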
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-943140 -n kubernetes-upgrade-943140
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-943140 -n kubernetes-upgrade-943140: exit status 2 (388.492308ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
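The --format={{.Host}} template prints only the host state, so "Running" here coexists with exit status 2: the container is up while the kubelet and apiserver are not. A fuller machine-readable view, assuming the standard status flags of this minikube build, would be:

	out/minikube-linux-arm64 status -p kubernetes-upgrade-943140 --output=json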
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-943140 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-957064 sudo systemctl status kubelet --all --full --no-pager                                     │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo systemctl cat kubelet --no-pager                                                     │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo systemctl status docker --all --full --no-pager                                      │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo systemctl cat docker --no-pager                                                      │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo cat /etc/docker/daemon.json                                                          │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo docker system info                                                                   │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo cri-dockerd --version                                                                │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo systemctl cat containerd --no-pager                                                  │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo cat /etc/containerd/config.toml                                                      │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo containerd config dump                                                               │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo systemctl status crio --all --full --no-pager                                        │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo systemctl cat crio --no-pager                                                        │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p cilium-957064 sudo crio config                                                                          │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ delete  │ -p cilium-957064                                                                                           │ cilium-957064            │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ start   │ -p force-systemd-env-925156 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-925156 │ jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:33:57
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:33:57.290062  597851 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:33:57.290286  597851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:33:57.290321  597851 out.go:374] Setting ErrFile to fd 2...
	I1210 07:33:57.290341  597851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:33:57.290617  597851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 07:33:57.291067  597851 out.go:368] Setting JSON to false
	I1210 07:33:57.292030  597851 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15390,"bootTime":1765336648,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 07:33:57.292136  597851 start.go:143] virtualization:  
	I1210 07:33:57.295795  597851 out.go:179] * [force-systemd-env-925156] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:33:57.299636  597851 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:33:57.299770  597851 notify.go:221] Checking for updates...
	I1210 07:33:57.305584  597851 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:33:57.308610  597851 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 07:33:57.311549  597851 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 07:33:57.314559  597851 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:33:57.317533  597851 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1210 07:33:57.321003  597851 config.go:182] Loaded profile config "kubernetes-upgrade-943140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 07:33:57.321129  597851 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:33:57.354418  597851 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:33:57.354551  597851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:33:57.416565  597851 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:33:57.407046826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:33:57.416672  597851 docker.go:319] overlay module found
	I1210 07:33:57.419801  597851 out.go:179] * Using the docker driver based on user configuration
	I1210 07:33:57.422551  597851 start.go:309] selected driver: docker
	I1210 07:33:57.422571  597851 start.go:927] validating driver "docker" against <nil>
	I1210 07:33:57.422586  597851 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:33:57.423347  597851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:33:57.488240  597851 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:33:57.479074988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:33:57.488402  597851 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:33:57.488623  597851 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 07:33:57.491536  597851 out.go:179] * Using Docker driver with root privileges
	I1210 07:33:57.494409  597851 cni.go:84] Creating CNI manager for ""
	I1210 07:33:57.494480  597851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:33:57.494492  597851 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 07:33:57.494566  597851 start.go:353] cluster config:
	{Name:force-systemd-env-925156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-env-925156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:33:57.499482  597851 out.go:179] * Starting "force-systemd-env-925156" primary control-plane node in "force-systemd-env-925156" cluster
	I1210 07:33:57.502410  597851 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:33:57.505421  597851 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:33:57.508328  597851 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 07:33:57.508384  597851 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:33:57.531115  597851 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:33:57.531140  597851 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:33:57.561855  597851 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 07:33:57.734116  597851 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1210 07:33:57.734274  597851 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/force-systemd-env-925156/config.json ...
	I1210 07:33:57.734320  597851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/force-systemd-env-925156/config.json: {Name:mk963c0b6d44d1dd2476c9fe8160bef55c1e62b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:33:57.734389  597851 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:33:57.734483  597851 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:33:57.734510  597851 start.go:360] acquireMachinesLock for force-systemd-env-925156: {Name:mk31c6c52d3ae596c6b1d13f987d481c2c824b06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:33:57.734551  597851 start.go:364] duration metric: took 30.713µs to acquireMachinesLock for "force-systemd-env-925156"
	I1210 07:33:57.734569  597851 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-925156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-env-925156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:33:57.734632  597851 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:33:57.740165  597851 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:33:57.740431  597851 start.go:159] libmachine.API.Create for "force-systemd-env-925156" (driver="docker")
	I1210 07:33:57.740473  597851 client.go:173] LocalClient.Create starting
	I1210 07:33:57.740550  597851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem
	I1210 07:33:57.740588  597851 main.go:143] libmachine: Decoding PEM data...
	I1210 07:33:57.740616  597851 main.go:143] libmachine: Parsing certificate...
	I1210 07:33:57.740673  597851 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem
	I1210 07:33:57.740697  597851 main.go:143] libmachine: Decoding PEM data...
	I1210 07:33:57.740713  597851 main.go:143] libmachine: Parsing certificate...
	I1210 07:33:57.741110  597851 cli_runner.go:164] Run: docker network inspect force-systemd-env-925156 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:33:57.771174  597851 cli_runner.go:211] docker network inspect force-systemd-env-925156 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:33:57.771260  597851 network_create.go:284] running [docker network inspect force-systemd-env-925156] to gather additional debugging logs...
	I1210 07:33:57.771276  597851 cli_runner.go:164] Run: docker network inspect force-systemd-env-925156
	W1210 07:33:57.790300  597851 cli_runner.go:211] docker network inspect force-systemd-env-925156 returned with exit code 1
	I1210 07:33:57.790332  597851 network_create.go:287] error running [docker network inspect force-systemd-env-925156]: docker network inspect force-systemd-env-925156: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-925156 not found
	I1210 07:33:57.790346  597851 network_create.go:289] output of [docker network inspect force-systemd-env-925156]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-925156 not found
	
	** /stderr **
	I1210 07:33:57.790446  597851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:33:57.827997  597851 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9731135ae282 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:e0:de:21:5b:05} reservation:<nil>}
	I1210 07:33:57.828460  597851 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13224e483db3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:a1:17:90:be:83} reservation:<nil>}
	I1210 07:33:57.828876  597851 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-75aeaca70a0d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:e4:a7:0f:01:e7} reservation:<nil>}
	I1210 07:33:57.829175  597851 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7f3fae6d2bc8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:54:6c:a0:bc:17} reservation:<nil>}
	I1210 07:33:57.829729  597851 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ad0110}
	I1210 07:33:57.829754  597851 network_create.go:124] attempt to create docker network force-systemd-env-925156 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 07:33:57.829821  597851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-925156 force-systemd-env-925156
	I1210 07:33:57.899397  597851 network_create.go:108] docker network force-systemd-env-925156 192.168.85.0/24 created
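Note: the run above is minikube's subnet scan in action: network.go walks the 192.168.x.0/24 private ranges, skips each subnet already owned by an existing bridge (br-9731135ae282 and friends), and creates the new bridge on the first free one. A sketch of verifying the result by hand, reusing the same Go-template style the log itself uses:

	docker network inspect force-systemd-env-925156 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# expected here: 192.168.85.0/24 via 192.168.85.1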
	I1210 07:33:57.899452  597851 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-925156" container
	I1210 07:33:57.899543  597851 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:33:57.921161  597851 cli_runner.go:164] Run: docker volume create force-systemd-env-925156 --label name.minikube.sigs.k8s.io=force-systemd-env-925156 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:33:57.939434  597851 oci.go:103] Successfully created a docker volume force-systemd-env-925156
	I1210 07:33:57.939537  597851 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-925156-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-925156 --entrypoint /usr/bin/test -v force-systemd-env-925156:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 07:33:57.977775  597851 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:33:58.164453  597851 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
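Note: "Not caching binary" means minikube fetches kubeadm straight from dl.k8s.io and validates it against the published .sha256 sidecar instead of keeping a local copy. The manual equivalent of that download-and-verify, using the same URLs (a sketch, not minikube's exact code path):

	curl -LO https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm
	curl -LO https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check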
	I1210 07:33:58.387187  597851 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:33:58.387263  597851 cache.go:107] acquiring lock: {Name:mk02212e897dba66869d457b3bbeea186c9977d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:33:58.387336  597851 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 07:33:58.387344  597851 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3" took 89.864µs
	I1210 07:33:58.387354  597851 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 07:33:58.387364  597851 cache.go:107] acquiring lock: {Name:mkcde84ea8e341b56c14a9da0ddd80f253a2bcdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:33:58.387396  597851 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 07:33:58.387401  597851 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3" took 38.351µs
	I1210 07:33:58.387407  597851 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 07:33:58.387416  597851 cache.go:107] acquiring lock: {Name:mkd358dfd00c757fa5e4489a81c6d55b1de8de5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:33:58.387442  597851 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 07:33:58.387446  597851 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3" took 31.442µs
	I1210 07:33:58.387452  597851 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 07:33:58.387461  597851 cache.go:107] acquiring lock: {Name:mk1e8ea2965a60a26ea6e464eb610a6affff1a11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:33:58.387485  597851 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 07:33:58.387490  597851 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3" took 30.211µs
	I1210 07:33:58.387496  597851 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 07:33:58.387505  597851 cache.go:107] acquiring lock: {Name:mk028ba2317f3b1c037987bf153e02fff8ae3e15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:33:58.387531  597851 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 07:33:58.387535  597851 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 31.573µs
	I1210 07:33:58.387553  597851 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 07:33:58.387563  597851 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:33:58.387589  597851 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:33:58.387594  597851 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.771µs
	I1210 07:33:58.387599  597851 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:33:58.387619  597851 cache.go:107] acquiring lock: {Name:mk528ea302435a8d73a952727ebcf4c5d4bd15a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:33:58.387645  597851 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 07:33:58.387650  597851 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 32.132µs
	I1210 07:33:58.387657  597851 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 07:33:58.387765  597851 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:33:58.387776  597851 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 611.075µs
	I1210 07:33:58.387792  597851 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:33:58.387800  597851 cache.go:87] Successfully saved all images to host disk.
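Note: each "acquiring lock" / "exists" pair above is a per-image file lock followed by a check for a pre-saved tarball, so all eight images are served from the profile cache with no registry pull. The cache layout can be inspected directly (path taken from the log):

	ls /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/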
	I1210 07:33:58.450995  597851 oci.go:107] Successfully prepared a docker volume force-systemd-env-925156
	I1210 07:33:58.451054  597851 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	W1210 07:33:58.451189  597851 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:33:58.451298  597851 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:33:58.514868  597851 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-925156 --name force-systemd-env-925156 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-925156 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-925156 --network force-systemd-env-925156 --ip 192.168.85.2 --volume force-systemd-env-925156:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 07:33:58.848230  597851 cli_runner.go:164] Run: docker container inspect force-systemd-env-925156 --format={{.State.Running}}
	I1210 07:33:58.876994  597851 cli_runner.go:164] Run: docker container inspect force-systemd-env-925156 --format={{.State.Status}}
	I1210 07:33:58.901660  597851 cli_runner.go:164] Run: docker exec force-systemd-env-925156 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:33:58.961409  597851 oci.go:144] the created container "force-systemd-env-925156" has a running status.
	I1210 07:33:58.961437  597851 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/force-systemd-env-925156/id_rsa...
	I1210 07:33:59.253130  597851 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/force-systemd-env-925156/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1210 07:33:59.253262  597851 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-362392/.minikube/machines/force-systemd-env-925156/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:33:59.274358  597851 cli_runner.go:164] Run: docker container inspect force-systemd-env-925156 --format={{.State.Status}}
	I1210 07:33:59.293696  597851 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:33:59.293728  597851 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-925156 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:33:59.357749  597851 cli_runner.go:164] Run: docker container inspect force-systemd-env-925156 --format={{.State.Status}}
	I1210 07:33:59.391867  597851 machine.go:94] provisionDockerMachine start ...
	I1210 07:33:59.391966  597851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-925156
	I1210 07:33:59.427052  597851 main.go:143] libmachine: Using SSH client type: native
	I1210 07:33:59.427376  597851 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33416 <nil> <nil>}
	I1210 07:33:59.427385  597851 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:33:59.430484  597851 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
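Note: provisionDockerMachine resolves the host port Docker mapped to the container's 22/tcp (33416 here) and dials it as user "docker" with the key generated a few lines earlier; an initial "handshake failed: EOF" usually just means sshd inside the kicbase container has not finished starting, and the dial is retried. The manual equivalent of the probe, with every value taken from the log above:

	ssh -i /home/jenkins/minikube-integration/22094-362392/.minikube/machines/force-systemd-env-925156/id_rsa \
	    -p 33416 docker@127.0.0.1 hostname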
	I1210 07:34:02.132628  557955 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000881028s
	I1210 07:34:02.132663  557955 kubeadm.go:319] 
	I1210 07:34:02.132718  557955 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:34:02.132749  557955 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:34:02.132849  557955 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:34:02.132854  557955 kubeadm.go:319] 
	I1210 07:34:02.132953  557955 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:34:02.132983  557955 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:34:02.133012  557955 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:34:02.133017  557955 kubeadm.go:319] 
	I1210 07:34:02.136967  557955 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:34:02.137455  557955 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:34:02.137576  557955 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:34:02.137812  557955 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:34:02.137817  557955 kubeadm.go:319] 
	I1210 07:34:02.137886  557955 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
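Note: the four-minute kubelet-check timeout above is only the symptom; the kubelet journal later in this log shows the kubelet exiting on "kubelet is configured to not run on a host using cgroup v1". A quick, generic way to confirm which cgroup hierarchy a host mounts (plain Linux, nothing minikube-specific):

	stat -fc %T /sys/fs/cgroup
	# cgroup2fs -> unified cgroup v2; tmpfs -> legacy cgroup v1 hierarchy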
	I1210 07:34:02.137943  557955 kubeadm.go:403] duration metric: took 12m9.315054516s to StartCluster
	I1210 07:34:02.137979  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:34:02.138039  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:34:02.174815  557955 cri.go:89] found id: ""
	I1210 07:34:02.174839  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.174847  557955 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:02.174854  557955 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:34:02.174915  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:34:02.208728  557955 cri.go:89] found id: ""
	I1210 07:34:02.208752  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.208760  557955 logs.go:284] No container was found matching "etcd"
	I1210 07:34:02.208767  557955 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:34:02.208832  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:34:02.236860  557955 cri.go:89] found id: ""
	I1210 07:34:02.236884  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.236893  557955 logs.go:284] No container was found matching "coredns"
	I1210 07:34:02.236899  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:34:02.236958  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:34:02.265401  557955 cri.go:89] found id: ""
	I1210 07:34:02.265424  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.265433  557955 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:02.265444  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:34:02.265506  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:34:02.291955  557955 cri.go:89] found id: ""
	I1210 07:34:02.292035  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.292057  557955 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:02.292078  557955 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:34:02.292168  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:34:02.317793  557955 cri.go:89] found id: ""
	I1210 07:34:02.317829  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.317838  557955 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:02.317858  557955 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:34:02.317943  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:34:02.344043  557955 cri.go:89] found id: ""
	I1210 07:34:02.344125  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.344162  557955 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:02.344192  557955 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:34:02.344286  557955 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:34:02.370559  557955 cri.go:89] found id: ""
	I1210 07:34:02.370585  557955 logs.go:282] 0 containers: []
	W1210 07:34:02.370595  557955 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:34:02.370605  557955 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:02.370641  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:02.387248  557955 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:02.387277  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:02.465343  557955 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:02.465410  557955 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:34:02.465438  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:34:02.512541  557955 logs.go:123] Gathering logs for container status ...
	I1210 07:34:02.513707  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:02.548584  557955 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:02.548608  557955 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:34:02.636863  557955 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000881028s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:34:02.636922  557955 out.go:285] * 
	W1210 07:34:02.638419  557955 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000881028s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:34:02.638516  557955 out.go:285] * 
	W1210 07:34:02.642858  557955 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:34:02.649671  557955 out.go:203] 
	W1210 07:34:02.653653  557955 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000881028s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:34:02.653803  557955 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:34:02.653865  557955 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:34:02.657006  557955 out.go:203] 
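Note: besides the generic cgroup-driver suggestion above, the [WARNING SystemVerification] text in this run names the actual knob: on a cgroup v1 host, kubelet v1.35+ refuses to start unless FailCgroupV1 is explicitly set to false (and the validation is explicitly skipped). In a kubelet config file that is the camelCase field below; a sketch to merge into /var/lib/kubelet/config.yaml on the node, with the field name inferred from the warning text:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false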
	
	
	==> CRI-O <==
	Dec 10 07:21:35 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:21:35.52598617Z" level=info msg="Neither image nor artifact registry.k8s.io/kube-apiserver:v1.35.0-rc.1 found" id=8f749574-2785-43ff-a2d7-7d09be005e5c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:21:35 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:21:35.539966088Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=1cefe0d8-d701-4f55-bf39-1520cb53fa06 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:21:35 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:21:35.540266728Z" level=info msg="Image registry.k8s.io/etcd:3.6.6-0 not found" id=1cefe0d8-d701-4f55-bf39-1520cb53fa06 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:21:35 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:21:35.540402467Z" level=info msg="Neither image nor artifact registry.k8s.io/etcd:3.6.6-0 found" id=1cefe0d8-d701-4f55-bf39-1520cb53fa06 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:21:35 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:21:35.551508661Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=13979d86-9a4f-45fa-a795-fb2d35b8388d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:21:35 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:21:35.55167279Z" level=info msg="Image registry.k8s.io/coredns/coredns:v1.13.1 not found" id=13979d86-9a4f-45fa-a795-fb2d35b8388d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:21:35 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:21:35.551723843Z" level=info msg="Neither image nor artifact registry.k8s.io/coredns/coredns:v1.13.1 found" id=13979d86-9a4f-45fa-a795-fb2d35b8388d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:21:35 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:21:35.664093226Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=9ed8a065-a1b4-46a1-b52a-64a85304b1ad name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:21:35 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:21:35.664250676Z" level=info msg="Image registry.k8s.io/kube-apiserver:v1.35.0-rc.1 not found" id=9ed8a065-a1b4-46a1-b52a-64a85304b1ad name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:21:35 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:21:35.664307341Z" level=info msg="Neither image nor artifact registry.k8s.io/kube-apiserver:v1.35.0-rc.1 found" id=9ed8a065-a1b4-46a1-b52a-64a85304b1ad name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:21:40 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:21:40.011288718Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=01955ac0-3bbc-485b-af5c-51084ad75434 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:25:56 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:25:56.867639158Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=015101c3-e779-487a-a931-ad2e2d665dab name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:25:56 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:25:56.870713292Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=bc62730e-e489-481f-b13e-a10296f922de name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:25:56 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:25:56.87227765Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=bca25f22-feed-4cc2-a8e9-a0215fc090d1 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:25:56 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:25:56.87381772Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=a5ebc6b8-222d-4f55-8dc6-389dfbd21cfb name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:25:56 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:25:56.874736204Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a2dd5e86-1b28-4770-953c-54a54244c617 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:25:56 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:25:56.876019992Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=77e07d73-58b5-4fd0-8458-ffb17efe9eb3 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:25:56 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:25:56.876822635Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=956d0381-1ebc-4ee0-9603-39bbb223bcd6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:30:00 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:30:00.903315526Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-rc.1" id=6d7e0908-7d7a-49c5-95d5-c18001cc917a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:30:00 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:30:00.905824008Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" id=0fa3315d-11d2-412a-9378-cb08669662d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:30:00 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:30:00.907689185Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-rc.1" id=4a943f32-1f6d-402c-92b1-4b11ae2e55b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:30:00 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:30:00.910861789Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=fbe62e3f-b17f-46ec-a912-291bb9bfaeba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:30:00 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:30:00.911969362Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=46182c0f-2fb3-4cac-85d5-6e7293672881 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:30:00 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:30:00.913827654Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=580a2e5c-a964-4a2f-8ede-6a89ecc3abce name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:30:00 kubernetes-upgrade-943140 crio[615]: time="2025-12-10T07:30:00.915818215Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=52ce210b-bf3b-4c90-a180-5bca4abb8913 name=/runtime.v1.ImageService/ImageStatus
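Note: every CRI-O entry above is an ImageStatus probe reporting that the v1.35.0-rc.1 control-plane images were still absent from the runtime's store at check time. From inside the node (for example via "minikube ssh -p kubernetes-upgrade-943140") the runtime's view can be listed directly:

	sudo crictl images | grep -E 'kube-apiserver|etcd|coredns'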
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 06:58] overlayfs: idmapped layers are currently not supported
	[Dec10 06:59] overlayfs: idmapped layers are currently not supported
	[  +3.762793] overlayfs: idmapped layers are currently not supported
	[ +45.624061] overlayfs: idmapped layers are currently not supported
	[Dec10 07:00] overlayfs: idmapped layers are currently not supported
	[Dec10 07:02] overlayfs: idmapped layers are currently not supported
	[Dec10 07:06] overlayfs: idmapped layers are currently not supported
	[Dec10 07:07] overlayfs: idmapped layers are currently not supported
	[Dec10 07:08] overlayfs: idmapped layers are currently not supported
	[Dec10 07:09] overlayfs: idmapped layers are currently not supported
	[Dec10 07:10] overlayfs: idmapped layers are currently not supported
	[Dec10 07:11] overlayfs: idmapped layers are currently not supported
	[Dec10 07:12] overlayfs: idmapped layers are currently not supported
	[ +13.722126] overlayfs: idmapped layers are currently not supported
	[Dec10 07:13] overlayfs: idmapped layers are currently not supported
	[ +29.922964] overlayfs: idmapped layers are currently not supported
	[Dec10 07:14] overlayfs: idmapped layers are currently not supported
	[ +47.732709] overlayfs: idmapped layers are currently not supported
	[Dec10 07:16] overlayfs: idmapped layers are currently not supported
	[Dec10 07:17] overlayfs: idmapped layers are currently not supported
	[Dec10 07:19] overlayfs: idmapped layers are currently not supported
	[Dec10 07:21] overlayfs: idmapped layers are currently not supported
	[ +28.936234] overlayfs: idmapped layers are currently not supported
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:34:04 up  4:16,  0 user,  load average: 3.84, 2.20, 2.04
	Linux kubernetes-upgrade-943140 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:34:02 kubernetes-upgrade-943140 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:34:02 kubernetes-upgrade-943140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 10 07:34:02 kubernetes-upgrade-943140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:02 kubernetes-upgrade-943140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:02 kubernetes-upgrade-943140 kubelet[12895]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:34:02 kubernetes-upgrade-943140 kubelet[12895]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:34:02 kubernetes-upgrade-943140 kubelet[12895]: E1210 07:34:02.958601   12895 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:34:02 kubernetes-upgrade-943140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:34:02 kubernetes-upgrade-943140 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:34:03 kubernetes-upgrade-943140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 10 07:34:03 kubernetes-upgrade-943140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:03 kubernetes-upgrade-943140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:03 kubernetes-upgrade-943140 kubelet[12902]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:34:03 kubernetes-upgrade-943140 kubelet[12902]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:34:03 kubernetes-upgrade-943140 kubelet[12902]: E1210 07:34:03.695220   12902 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:34:03 kubernetes-upgrade-943140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:34:03 kubernetes-upgrade-943140 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:34:04 kubernetes-upgrade-943140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 10 07:34:04 kubernetes-upgrade-943140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:04 kubernetes-upgrade-943140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:04 kubernetes-upgrade-943140 kubelet[12975]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:34:04 kubernetes-upgrade-943140 kubelet[12975]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:34:04 kubernetes-upgrade-943140 kubelet[12975]: E1210 07:34:04.482852   12975 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:34:04 kubernetes-upgrade-943140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:34:04 kubernetes-upgrade-943140 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-943140 -n kubernetes-upgrade-943140
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-943140 -n kubernetes-upgrade-943140: exit status 2 (486.358711ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-943140" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-943140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-943140
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-943140: (2.395154765s)
--- FAIL: TestKubernetesUpgrade (806.50s)
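Note: to reproduce this single failure outside CI, the integration suite can be filtered to the one test from a minikube repository checkout (a sketch; assumes out/minikube-linux-arm64 has already been built, and the docker driver plus cri-o runtime used in this run are injected through the harness's own flags, omitted here rather than guessed):

	go test ./test/integration -run 'TestKubernetesUpgrade$' -timeout 90m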

                                                
                                    
x
+
TestPause/serial/Pause (6.53s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-541318 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-541318 --alsologtostderr -v=5: exit status 80 (1.912531037s)

                                                
                                                
-- stdout --
	* Pausing node pause-541318 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 07:32:51.640029  591823 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:32:51.640583  591823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:32:51.640611  591823 out.go:374] Setting ErrFile to fd 2...
	I1210 07:32:51.640639  591823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:32:51.641069  591823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 07:32:51.641438  591823 out.go:368] Setting JSON to false
	I1210 07:32:51.641493  591823 mustload.go:66] Loading cluster: pause-541318
	I1210 07:32:51.642669  591823 config.go:182] Loaded profile config "pause-541318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:32:51.643218  591823 cli_runner.go:164] Run: docker container inspect pause-541318 --format={{.State.Status}}
	I1210 07:32:51.661357  591823 host.go:66] Checking if "pause-541318" exists ...
	I1210 07:32:51.661707  591823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:32:51.731590  591823 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-10 07:32:51.722217218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:32:51.732226  591823 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-541318 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 07:32:51.735177  591823 out.go:179] * Pausing node pause-541318 ... 
	I1210 07:32:51.738895  591823 host.go:66] Checking if "pause-541318" exists ...
	I1210 07:32:51.739260  591823 ssh_runner.go:195] Run: systemctl --version
	I1210 07:32:51.739320  591823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:51.756108  591823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/pause-541318/id_rsa Username:docker}
	I1210 07:32:51.860416  591823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:51.873807  591823 pause.go:52] kubelet running: true
	I1210 07:32:51.873892  591823 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 07:32:52.089293  591823 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 07:32:52.089388  591823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 07:32:52.167002  591823 cri.go:89] found id: "a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534"
	I1210 07:32:52.167096  591823 cri.go:89] found id: "6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406"
	I1210 07:32:52.167125  591823 cri.go:89] found id: "0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a"
	I1210 07:32:52.167147  591823 cri.go:89] found id: "fbfbc68bf7e73e711c89550613347e9d9f54da7bb77bcad1f3546380c0bd2b16"
	I1210 07:32:52.167182  591823 cri.go:89] found id: "5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44"
	I1210 07:32:52.167207  591823 cri.go:89] found id: "af021044de79f9c1effc01d7a8efa84473646366aa25f04b2662c4e063b180d6"
	I1210 07:32:52.167227  591823 cri.go:89] found id: "b181741de03b3e5f209b1a1b0315ca37b5ee3580f1c791f79562bc7631ceb6bc"
	I1210 07:32:52.167275  591823 cri.go:89] found id: "9fd1e2e6465f7112d9751e5dee9626d5348b62245fee462719c4080844f7e4d2"
	I1210 07:32:52.167293  591823 cri.go:89] found id: "c991b3b2a6538489a7be04598d8f75580e819a42c6822fbb47292427ab43b734"
	I1210 07:32:52.167330  591823 cri.go:89] found id: "f059a046a23b865faa91554e1584660e4fed6a2d6bd2f0926b4791d35e35c13c"
	I1210 07:32:52.167352  591823 cri.go:89] found id: "862d72ac46eb18991fb9881b135e88bd4acae9159fb9f5601bc14e92f27e4546"
	I1210 07:32:52.167372  591823 cri.go:89] found id: "ea9f65ca8057e73829ccfe145b45c0c841718482de948514f1bd5814252609a7"
	I1210 07:32:52.167406  591823 cri.go:89] found id: "6038a6beafc76b1cb53784be90d95d126ec81718edc58b3aac2ebf5aa9eec3c0"
	I1210 07:32:52.167428  591823 cri.go:89] found id: "6659d520d9ed3ad91628947e1ee647d40841956d2e2bf56e55edcd7d14794290"
	I1210 07:32:52.167445  591823 cri.go:89] found id: ""
	I1210 07:32:52.167539  591823 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:32:52.181614  591823 retry.go:31] will retry after 305.08884ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:32:52Z" level=error msg="open /run/runc: no such file or directory"
	I1210 07:32:52.486993  591823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:52.500987  591823 pause.go:52] kubelet running: false
	I1210 07:32:52.501058  591823 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 07:32:52.655610  591823 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 07:32:52.655688  591823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 07:32:52.724471  591823 cri.go:89] found id: "a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534"
	I1210 07:32:52.724507  591823 cri.go:89] found id: "6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406"
	I1210 07:32:52.724513  591823 cri.go:89] found id: "0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a"
	I1210 07:32:52.724517  591823 cri.go:89] found id: "fbfbc68bf7e73e711c89550613347e9d9f54da7bb77bcad1f3546380c0bd2b16"
	I1210 07:32:52.724520  591823 cri.go:89] found id: "5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44"
	I1210 07:32:52.724524  591823 cri.go:89] found id: "af021044de79f9c1effc01d7a8efa84473646366aa25f04b2662c4e063b180d6"
	I1210 07:32:52.724529  591823 cri.go:89] found id: "b181741de03b3e5f209b1a1b0315ca37b5ee3580f1c791f79562bc7631ceb6bc"
	I1210 07:32:52.724531  591823 cri.go:89] found id: "9fd1e2e6465f7112d9751e5dee9626d5348b62245fee462719c4080844f7e4d2"
	I1210 07:32:52.724535  591823 cri.go:89] found id: "c991b3b2a6538489a7be04598d8f75580e819a42c6822fbb47292427ab43b734"
	I1210 07:32:52.724541  591823 cri.go:89] found id: "f059a046a23b865faa91554e1584660e4fed6a2d6bd2f0926b4791d35e35c13c"
	I1210 07:32:52.724547  591823 cri.go:89] found id: "862d72ac46eb18991fb9881b135e88bd4acae9159fb9f5601bc14e92f27e4546"
	I1210 07:32:52.724552  591823 cri.go:89] found id: "ea9f65ca8057e73829ccfe145b45c0c841718482de948514f1bd5814252609a7"
	I1210 07:32:52.724554  591823 cri.go:89] found id: "6038a6beafc76b1cb53784be90d95d126ec81718edc58b3aac2ebf5aa9eec3c0"
	I1210 07:32:52.724557  591823 cri.go:89] found id: "6659d520d9ed3ad91628947e1ee647d40841956d2e2bf56e55edcd7d14794290"
	I1210 07:32:52.724568  591823 cri.go:89] found id: ""
	I1210 07:32:52.724623  591823 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:32:52.736146  591823 retry.go:31] will retry after 495.155385ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:32:52Z" level=error msg="open /run/runc: no such file or directory"
	I1210 07:32:53.231628  591823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:53.245350  591823 pause.go:52] kubelet running: false
	I1210 07:32:53.245428  591823 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 07:32:53.393785  591823 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 07:32:53.393884  591823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 07:32:53.465359  591823 cri.go:89] found id: "a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534"
	I1210 07:32:53.465384  591823 cri.go:89] found id: "6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406"
	I1210 07:32:53.465389  591823 cri.go:89] found id: "0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a"
	I1210 07:32:53.465393  591823 cri.go:89] found id: "fbfbc68bf7e73e711c89550613347e9d9f54da7bb77bcad1f3546380c0bd2b16"
	I1210 07:32:53.465396  591823 cri.go:89] found id: "5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44"
	I1210 07:32:53.465400  591823 cri.go:89] found id: "af021044de79f9c1effc01d7a8efa84473646366aa25f04b2662c4e063b180d6"
	I1210 07:32:53.465414  591823 cri.go:89] found id: "b181741de03b3e5f209b1a1b0315ca37b5ee3580f1c791f79562bc7631ceb6bc"
	I1210 07:32:53.465419  591823 cri.go:89] found id: "9fd1e2e6465f7112d9751e5dee9626d5348b62245fee462719c4080844f7e4d2"
	I1210 07:32:53.465422  591823 cri.go:89] found id: "c991b3b2a6538489a7be04598d8f75580e819a42c6822fbb47292427ab43b734"
	I1210 07:32:53.465428  591823 cri.go:89] found id: "f059a046a23b865faa91554e1584660e4fed6a2d6bd2f0926b4791d35e35c13c"
	I1210 07:32:53.465438  591823 cri.go:89] found id: "862d72ac46eb18991fb9881b135e88bd4acae9159fb9f5601bc14e92f27e4546"
	I1210 07:32:53.465442  591823 cri.go:89] found id: "ea9f65ca8057e73829ccfe145b45c0c841718482de948514f1bd5814252609a7"
	I1210 07:32:53.465448  591823 cri.go:89] found id: "6038a6beafc76b1cb53784be90d95d126ec81718edc58b3aac2ebf5aa9eec3c0"
	I1210 07:32:53.465454  591823 cri.go:89] found id: "6659d520d9ed3ad91628947e1ee647d40841956d2e2bf56e55edcd7d14794290"
	I1210 07:32:53.465460  591823 cri.go:89] found id: ""
	I1210 07:32:53.465539  591823 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:32:53.480861  591823 out.go:203] 
	W1210 07:32:53.483866  591823 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:32:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:32:53.483914  591823 out.go:285] * 
	W1210 07:32:53.489601  591823 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:32:53.492592  591823 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-541318 --alsologtostderr -v=5" : exit status 80
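The stderr trace above shows the whole failure path: pause disables the kubelet, lists the kube-system/kubernetes-dashboard/istio-operator containers via crictl (the same fourteen IDs on each attempt), then shells out to `sudo runc list -f json` to enumerate runtime state. That last step fails three times in a row because /run/runc does not exist on this CRI-O node, and the command finally aborts with GUEST_PAUSE. Below is a minimal sketch of that probe, assuming only the stock runc CLI (the --root flag used here is runc's state-directory override, defaulting to /run/runc; the log does not show minikube passing it explicitly):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listRuncContainers replays the probe from the log: `sudo runc list -f json`.
	// runc keeps container state under --root (default /run/runc); when that
	// directory is missing it exits non-zero with
	// "open /run/runc: no such file or directory", exactly as captured above.
	func listRuncContainers(root string) ([]byte, error) {
		cmd := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("runc list (root %s): %w: %s", root, err, out)
		}
		return out, nil
	}

	func main() {
		if _, err := listRuncContainers("/run/runc"); err != nil {
			fmt.Println(err) // expected on the affected node
		}
	}

Run on the node itself, this reproduces the error without going through minikube, which separates the missing-state-directory question from the pause logic.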
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-541318
helpers_test.go:244: (dbg) docker inspect pause-541318:

-- stdout --
	[
	    {
	        "Id": "3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06",
	        "Created": "2025-12-10T07:31:21.992197548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 587229,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:31:22.060600424Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06/hosts",
	        "LogPath": "/var/lib/docker/containers/3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06/3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06-json.log",
	        "Name": "/pause-541318",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-541318:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-541318",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06",
	                "LowerDir": "/var/lib/docker/overlay2/1d601ae63aaa1591dd592682ffdb65f96f3fbb4ac25caa5faa48862b6b1da810-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d601ae63aaa1591dd592682ffdb65f96f3fbb4ac25caa5faa48862b6b1da810/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d601ae63aaa1591dd592682ffdb65f96f3fbb4ac25caa5faa48862b6b1da810/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d601ae63aaa1591dd592682ffdb65f96f3fbb4ac25caa5faa48862b6b1da810/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-541318",
	                "Source": "/var/lib/docker/volumes/pause-541318/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-541318",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-541318",
	                "name.minikube.sigs.k8s.io": "pause-541318",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "efaaf503bff8bd9641dc0531bfd54a7a18646aaae3cb4f6e2f97f31c8e1e489d",
	            "SandboxKey": "/var/run/docker/netns/efaaf503bff8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-541318": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:26:4e:b7:f7:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dd62564ee5be11b79b432ea2b9f00a1416c2797f7a90e123b38feaa45a90fb48",
	                    "EndpointID": "99ef51ad1aa6c0adc0b6c312635df84b9a682a8861755a863389026489e6735e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-541318",
	                        "3f0bcbe42f2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
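The inspect dump above is what the harness's port lookup walks: the Go template in the cli_runner lines ({{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}) resolves to 33405 here, the port the sshutil client then dials on 127.0.0.1. A standalone sketch of the same lookup, decoding docker inspect JSON directly (the struct is trimmed to the fields this needs; the JSON tags follow the keys shown above):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectResult models just enough of `docker inspect` output to reach
	// NetworkSettings.Ports; docker prints an array of these.
	type inspectResult struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	// sshHostPort returns the host port bound to the container's 22/tcp,
	// mirroring the template the cli_runner lines use.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var results []inspectResult
		if err := json.Unmarshal(out, &results); err != nil {
			return "", err
		}
		if len(results) == 0 || len(results[0].NetworkSettings.Ports["22/tcp"]) == 0 {
			return "", fmt.Errorf("no 22/tcp binding for %s", container)
		}
		return results[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
	}

	func main() {
		fmt.Println(sshHostPort("pause-541318")) // 33405 <nil> against the container above
	}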
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-541318 -n pause-541318
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-541318 -n pause-541318: exit status 2 (369.864529ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-541318 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-541318 logs -n 25: (1.417437662s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-673350 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                         │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:18 UTC │ 10 Dec 25 07:19 UTC │
	│ start   │ -p missing-upgrade-507679 --memory=3072 --driver=docker  --container-runtime=crio                                                             │ missing-upgrade-507679    │ jenkins │ v1.35.0 │ 10 Dec 25 07:18 UTC │ 10 Dec 25 07:20 UTC │
	│ start   │ -p NoKubernetes-673350 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                         │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ start   │ -p missing-upgrade-507679 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ missing-upgrade-507679    │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ delete  │ -p NoKubernetes-673350                                                                                                                        │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ start   │ -p NoKubernetes-673350 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                         │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ ssh     │ -p NoKubernetes-673350 sudo systemctl is-active --quiet service kubelet                                                                       │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │                     │
	│ stop    │ -p NoKubernetes-673350                                                                                                                        │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ start   │ -p NoKubernetes-673350 --driver=docker  --container-runtime=crio                                                                              │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ ssh     │ -p NoKubernetes-673350 sudo systemctl is-active --quiet service kubelet                                                                       │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │                     │
	│ delete  │ -p NoKubernetes-673350                                                                                                                        │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ start   │ -p kubernetes-upgrade-943140 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio      │ kubernetes-upgrade-943140 │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:21 UTC │
	│ delete  │ -p missing-upgrade-507679                                                                                                                     │ missing-upgrade-507679    │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:21 UTC │
	│ start   │ -p stopped-upgrade-051989 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                          │ stopped-upgrade-051989    │ jenkins │ v1.35.0 │ 10 Dec 25 07:21 UTC │ 10 Dec 25 07:21 UTC │
	│ stop    │ -p kubernetes-upgrade-943140                                                                                                                  │ kubernetes-upgrade-943140 │ jenkins │ v1.37.0 │ 10 Dec 25 07:21 UTC │ 10 Dec 25 07:21 UTC │
	│ start   │ -p kubernetes-upgrade-943140 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-943140 │ jenkins │ v1.37.0 │ 10 Dec 25 07:21 UTC │                     │
	│ stop    │ stopped-upgrade-051989 stop                                                                                                                   │ stopped-upgrade-051989    │ jenkins │ v1.35.0 │ 10 Dec 25 07:21 UTC │ 10 Dec 25 07:21 UTC │
	│ start   │ -p stopped-upgrade-051989 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ stopped-upgrade-051989    │ jenkins │ v1.37.0 │ 10 Dec 25 07:21 UTC │ 10 Dec 25 07:26 UTC │
	│ delete  │ -p stopped-upgrade-051989                                                                                                                     │ stopped-upgrade-051989    │ jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ start   │ -p running-upgrade-044448 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                          │ running-upgrade-044448    │ jenkins │ v1.35.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ start   │ -p running-upgrade-044448 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ running-upgrade-044448    │ jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:31 UTC │
	│ delete  │ -p running-upgrade-044448                                                                                                                     │ running-upgrade-044448    │ jenkins │ v1.37.0 │ 10 Dec 25 07:31 UTC │ 10 Dec 25 07:31 UTC │
	│ start   │ -p pause-541318 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                     │ pause-541318              │ jenkins │ v1.37.0 │ 10 Dec 25 07:31 UTC │ 10 Dec 25 07:32 UTC │
	│ start   │ -p pause-541318 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                              │ pause-541318              │ jenkins │ v1.37.0 │ 10 Dec 25 07:32 UTC │ 10 Dec 25 07:32 UTC │
	│ pause   │ -p pause-541318 --alsologtostderr -v=5                                                                                                        │ pause-541318              │ jenkins │ v1.37.0 │ 10 Dec 25 07:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:32:24
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:32:24.141152  590501 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:32:24.141349  590501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:32:24.141361  590501 out.go:374] Setting ErrFile to fd 2...
	I1210 07:32:24.141367  590501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:32:24.142125  590501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 07:32:24.142599  590501 out.go:368] Setting JSON to false
	I1210 07:32:24.143599  590501 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15297,"bootTime":1765336648,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 07:32:24.143677  590501 start.go:143] virtualization:  
	I1210 07:32:24.146659  590501 out.go:179] * [pause-541318] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:32:24.150423  590501 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:32:24.150490  590501 notify.go:221] Checking for updates...
	I1210 07:32:24.156594  590501 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:32:24.159621  590501 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 07:32:24.162631  590501 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 07:32:24.165738  590501 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:32:24.168661  590501 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:32:24.172008  590501 config.go:182] Loaded profile config "pause-541318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:32:24.172709  590501 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:32:24.207617  590501 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:32:24.207760  590501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:32:24.259591  590501 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-10 07:32:24.250404751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:32:24.259705  590501 docker.go:319] overlay module found
	I1210 07:32:24.262980  590501 out.go:179] * Using the docker driver based on existing profile
	I1210 07:32:24.265819  590501 start.go:309] selected driver: docker
	I1210 07:32:24.265890  590501 start.go:927] validating driver "docker" against &{Name:pause-541318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-541318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:32:24.266035  590501 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:32:24.266145  590501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:32:24.322164  590501 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-10 07:32:24.312052386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:32:24.322586  590501 cni.go:84] Creating CNI manager for ""
	I1210 07:32:24.322649  590501 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:32:24.322695  590501 start.go:353] cluster config:
	{Name:pause-541318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-541318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:32:24.327743  590501 out.go:179] * Starting "pause-541318" primary control-plane node in "pause-541318" cluster
	I1210 07:32:24.330651  590501 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:32:24.333504  590501 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:32:24.336334  590501 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 07:32:24.336539  590501 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:32:24.357308  590501 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:32:24.357333  590501 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:32:24.401464  590501 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 07:32:24.589808  590501 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1210 07:32:24.589980  590501 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/config.json ...
	I1210 07:32:24.590132  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:24.591185  590501 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:32:24.591252  590501 start.go:360] acquireMachinesLock for pause-541318: {Name:mk56902b498d952effced456e7ea808de6ac5fc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:24.591329  590501 start.go:364] duration metric: took 47.065µs to acquireMachinesLock for "pause-541318"
	I1210 07:32:24.591352  590501 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:32:24.591362  590501 fix.go:54] fixHost starting: 
	I1210 07:32:24.591637  590501 cli_runner.go:164] Run: docker container inspect pause-541318 --format={{.State.Status}}
	I1210 07:32:24.620567  590501 fix.go:112] recreateIfNeeded on pause-541318: state=Running err=<nil>
	W1210 07:32:24.620601  590501 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:32:24.624450  590501 out.go:252] * Updating the running docker "pause-541318" container ...
	I1210 07:32:24.624499  590501 machine.go:94] provisionDockerMachine start ...
	I1210 07:32:24.624587  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:24.643333  590501 main.go:143] libmachine: Using SSH client type: native
	I1210 07:32:24.643673  590501 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1210 07:32:24.643688  590501 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:32:24.759514  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:24.808906  590501 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-541318
	
	I1210 07:32:24.808930  590501 ubuntu.go:182] provisioning hostname "pause-541318"
	I1210 07:32:24.809082  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:24.845589  590501 main.go:143] libmachine: Using SSH client type: native
	I1210 07:32:24.845918  590501 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1210 07:32:24.845936  590501 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-541318 && echo "pause-541318" | sudo tee /etc/hostname
	I1210 07:32:24.922183  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:25.020553  590501 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-541318
	
	I1210 07:32:25.020662  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:25.044487  590501 main.go:143] libmachine: Using SSH client type: native
	I1210 07:32:25.044792  590501 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1210 07:32:25.044811  590501 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-541318' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-541318/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-541318' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:32:25.092616  590501 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.092739  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:32:25.092754  590501 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 169.257µs
	I1210 07:32:25.092768  590501 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:32:25.092782  590501 cache.go:107] acquiring lock: {Name:mkcde84ea8e341b56c14a9da0ddd80f253a2bcdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.092823  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 07:32:25.092833  590501 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3" took 52.193µs
	I1210 07:32:25.092839  590501 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 07:32:25.092849  590501 cache.go:107] acquiring lock: {Name:mkd358dfd00c757fa5e4489a81c6d55b1de8de5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.092893  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 07:32:25.092909  590501 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3" took 55.262µs
	I1210 07:32:25.092916  590501 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 07:32:25.092937  590501 cache.go:107] acquiring lock: {Name:mk1e8ea2965a60a26ea6e464eb610a6affff1a11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.092987  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 07:32:25.092997  590501 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3" took 65.281µs
	I1210 07:32:25.093003  590501 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 07:32:25.093013  590501 cache.go:107] acquiring lock: {Name:mk02212e897dba66869d457b3bbeea186c9977d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.093043  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 07:32:25.093052  590501 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3" took 40.674µs
	I1210 07:32:25.093058  590501 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 07:32:25.093068  590501 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.093098  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:32:25.093107  590501 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.198µs
	I1210 07:32:25.093113  590501 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:32:25.093127  590501 cache.go:107] acquiring lock: {Name:mk028ba2317f3b1c037987bf153e02fff8ae3e15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.093159  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 07:32:25.093167  590501 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 46.614µs
	I1210 07:32:25.093173  590501 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 07:32:25.093231  590501 cache.go:107] acquiring lock: {Name:mk528ea302435a8d73a952727ebcf4c5d4bd15a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.093275  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 07:32:25.093285  590501 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 103.902µs
	I1210 07:32:25.093291  590501 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 07:32:25.093309  590501 cache.go:87] Successfully saved all images to host disk.
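The cache.go entries above repeat one pattern per image: take a named lock, stat the tarball path, and report "exists ... succeeded" without re-downloading when the file is already on disk. A minimal Go sketch of that check-before-save flow (the helper name and the saveToTar callback are illustrative, not minikube's actual API):

    package cache

    import (
        "fmt"
        "os"
        "sync"
        "time"
    )

    // saveIfMissing mirrors the per-image flow in the log: lock, stat the
    // tarball, and skip the save when the file already exists.
    func saveIfMissing(mu *sync.Mutex, image, tarPath string, saveToTar func(image, tarPath string) error) error {
        mu.Lock()
        defer mu.Unlock()
        start := time.Now()
        if _, err := os.Stat(tarPath); err == nil {
            fmt.Printf("cache image %q -> %q took %s\n", image, tarPath, time.Since(start))
            return nil // already cached, nothing to do
        }
        return saveToTar(image, tarPath)
    }

The microsecond durations in the log (40-104µs per image) are consistent with this path: only the stat runs, never a download.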
	I1210 07:32:25.201644  590501 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:32:25.201671  590501 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 07:32:25.201689  590501 ubuntu.go:190] setting up certificates
	I1210 07:32:25.201710  590501 provision.go:84] configureAuth start
	I1210 07:32:25.201775  590501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-541318
	I1210 07:32:25.220022  590501 provision.go:143] copyHostCerts
	I1210 07:32:25.220097  590501 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 07:32:25.220106  590501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 07:32:25.220184  590501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 07:32:25.220306  590501 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 07:32:25.220313  590501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 07:32:25.220341  590501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 07:32:25.220402  590501 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 07:32:25.220407  590501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 07:32:25.220431  590501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 07:32:25.220487  590501 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.pause-541318 san=[127.0.0.1 192.168.85.2 localhost minikube pause-541318]
	I1210 07:32:25.634691  590501 provision.go:177] copyRemoteCerts
	I1210 07:32:25.634762  590501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:32:25.634803  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:25.657982  590501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/pause-541318/id_rsa Username:docker}
	I1210 07:32:25.766011  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:32:25.785497  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:32:25.804102  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 07:32:25.822526  590501 provision.go:87] duration metric: took 620.802472ms to configureAuth
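copyHostCerts above follows a remove-then-copy discipline so stale files never linger, and copyRemoteCerts then scp's the CA and server pair into /etc/docker. A sketch of the local half, under the assumption that a plain file copy is equivalent to the exec_runner's cp:

    package provision

    import (
        "io"
        "os"
    )

    // refreshCert removes any existing destination file before copying,
    // matching the "found ... removing ... cp" sequence in the log.
    func refreshCert(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o644)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }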
	I1210 07:32:25.822555  590501 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:32:25.822787  590501 config.go:182] Loaded profile config "pause-541318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:32:25.822904  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:25.847094  590501 main.go:143] libmachine: Using SSH client type: native
	I1210 07:32:25.847421  590501 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1210 07:32:25.847442  590501 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:32:31.275817  590501 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:32:31.275839  590501 machine.go:97] duration metric: took 6.651331169s to provisionDockerMachine
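The SSH command that just completed (logged at 07:32:25) writes a sysconfig drop-in marking the service CIDR as an insecure registry, then restarts cri-o. Rebuilding that command string in Go, as a sketch of how such a one-liner can be assembled (the function name is hypothetical):

    package provision

    import "fmt"

    // crioOptsCmd rebuilds the remote command from the log: persist
    // CRIO_MINIKUBE_OPTIONS under /etc/sysconfig and bounce cri-o so the
    // --insecure-registry flag for the service CIDR takes effect.
    func crioOptsCmd(serviceCIDR string) string {
        return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", serviceCIDR)
    }

crioOptsCmd("10.96.0.0/12") reproduces the command shown above; the ~6s gap before the next log line is the crio restart.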
	I1210 07:32:31.275852  590501 start.go:293] postStartSetup for "pause-541318" (driver="docker")
	I1210 07:32:31.275862  590501 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:32:31.275935  590501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:32:31.275983  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:31.294811  590501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/pause-541318/id_rsa Username:docker}
	I1210 07:32:31.401609  590501 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:32:31.405241  590501 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:32:31.405271  590501 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:32:31.405284  590501 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 07:32:31.405340  590501 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 07:32:31.405427  590501 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 07:32:31.405550  590501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:32:31.413660  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 07:32:31.432213  590501 start.go:296] duration metric: took 156.345035ms for postStartSetup
	I1210 07:32:31.432298  590501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:32:31.432348  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:31.449685  590501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/pause-541318/id_rsa Username:docker}
	I1210 07:32:31.555015  590501 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:32:31.560484  590501 fix.go:56] duration metric: took 6.969114068s for fixHost
	I1210 07:32:31.560512  590501 start.go:83] releasing machines lock for "pause-541318", held for 6.969167394s
	I1210 07:32:31.560591  590501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-541318
	I1210 07:32:31.578153  590501 ssh_runner.go:195] Run: cat /version.json
	I1210 07:32:31.578210  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:31.578481  590501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:32:31.578543  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:31.596949  590501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/pause-541318/id_rsa Username:docker}
	I1210 07:32:31.603081  590501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/pause-541318/id_rsa Username:docker}
	I1210 07:32:31.701796  590501 ssh_runner.go:195] Run: systemctl --version
	I1210 07:32:31.796850  590501 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:32:31.839893  590501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:32:31.844651  590501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:32:31.844753  590501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:32:31.853770  590501 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:32:31.853795  590501 start.go:496] detecting cgroup driver to use...
	I1210 07:32:31.853828  590501 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:32:31.853876  590501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:32:31.871571  590501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:32:31.891164  590501 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:32:31.891236  590501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:32:31.908631  590501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:32:31.923198  590501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:32:32.062018  590501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:32:32.214600  590501 docker.go:234] disabling docker service ...
	I1210 07:32:32.214756  590501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:32:32.233486  590501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:32:32.248645  590501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:32:32.387739  590501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:32:32.532589  590501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:32:32.546355  590501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:32:32.560992  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:32.720549  590501 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:32:32.720630  590501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.730295  590501 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:32:32.730368  590501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.740379  590501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.750010  590501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.759072  590501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:32:32.767807  590501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.777399  590501 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.786701  590501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.795740  590501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:32:32.804170  590501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:32:32.811841  590501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:32.940077  590501 ssh_runner.go:195] Run: sudo systemctl restart crio
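Each sed call above rewrites one key in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before the daemon-reload and crio restart. The same whole-line substitution, sketched in Go with regexp rather than sed (minikube shells out to sed on the node instead):

    package crio

    import "regexp"

    // setConfKey replaces the whole `key = ...` line with a quoted value,
    // the substitution `sed -i 's|^.*key = .*$|key = "value"|'` performs
    // in the log above.
    func setConfKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

For example, setConfKey(data, "pause_image", "registry.k8s.io/pause:3.10.1") matches the first sed in the sequence.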
	I1210 07:32:33.167449  590501 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:32:33.167539  590501 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:32:33.171453  590501 start.go:564] Will wait 60s for crictl version
	I1210 07:32:33.171546  590501 ssh_runner.go:195] Run: which crictl
	I1210 07:32:33.175138  590501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:32:33.206032  590501 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:32:33.206116  590501 ssh_runner.go:195] Run: crio --version
	I1210 07:32:33.235067  590501 ssh_runner.go:195] Run: crio --version
	I1210 07:32:33.270032  590501 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 07:32:33.273352  590501 cli_runner.go:164] Run: docker network inspect pause-541318 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:32:33.291010  590501 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 07:32:33.295283  590501 kubeadm.go:884] updating cluster {Name:pause-541318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-541318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:32:33.295512  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:33.453939  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:33.600064  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:33.756108  590501 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 07:32:33.756186  590501 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:32:33.789091  590501 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:32:33.789118  590501 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:32:33.789127  590501 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1210 07:32:33.789267  590501 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-541318 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-541318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:32:33.789354  590501 ssh_runner.go:195] Run: crio config
	I1210 07:32:33.845575  590501 cni.go:84] Creating CNI manager for ""
	I1210 07:32:33.845600  590501 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:32:33.845623  590501 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:32:33.845646  590501 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-541318 NodeName:pause-541318 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:32:33.845786  590501 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-541318"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:32:33.845862  590501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:33.854286  590501 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:32:33.854366  590501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:32:33.862578  590501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1210 07:32:33.876404  590501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:32:33.890114  590501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
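The kubeadm.yaml just scp'd above (2209 bytes) carries four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small Go sketch that splits such a multi-document file for a sanity check (illustrative, not minikube code):

    package kubeadm

    import "strings"

    // splitDocs separates a multi-document YAML stream on its `---`
    // markers, dropping empty documents, so each config section can be
    // inspected independently.
    func splitDocs(cfg string) []string {
        var docs []string
        for _, doc := range strings.Split(cfg, "\n---\n") {
            if strings.TrimSpace(doc) != "" {
                docs = append(docs, doc)
            }
        }
        return docs
    }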
	I1210 07:32:33.904307  590501 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:32:33.908078  590501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:34.051157  590501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:34.064929  590501 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318 for IP: 192.168.85.2
	I1210 07:32:34.064961  590501 certs.go:195] generating shared ca certs ...
	I1210 07:32:34.064978  590501 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.065136  590501 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 07:32:34.065248  590501 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 07:32:34.065261  590501 certs.go:257] generating profile certs ...
	I1210 07:32:34.065370  590501 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.key
	I1210 07:32:34.065445  590501 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/apiserver.key.bd9c7a8b
	I1210 07:32:34.065513  590501 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/proxy-client.key
	I1210 07:32:34.065634  590501 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 07:32:34.065671  590501 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 07:32:34.065688  590501 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 07:32:34.065719  590501 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 07:32:34.065746  590501 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:32:34.065780  590501 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 07:32:34.065836  590501 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 07:32:34.066432  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:32:34.085645  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:32:34.144262  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:32:34.200845  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:32:34.241445  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 07:32:34.283130  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:32:34.327169  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:32:34.370377  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:32:34.436080  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 07:32:34.479509  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 07:32:34.523014  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:32:34.557700  590501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:32:34.578858  590501 ssh_runner.go:195] Run: openssl version
	I1210 07:32:34.590002  590501 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 07:32:34.602557  590501 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 07:32:34.615190  590501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 07:32:34.619639  590501 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 07:32:34.619708  590501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 07:32:34.685446  590501 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:32:34.697623  590501 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 07:32:34.709737  590501 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 07:32:34.722457  590501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 07:32:34.731388  590501 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 07:32:34.731457  590501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 07:32:34.796412  590501 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:34.806444  590501 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:34.814910  590501 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:32:34.825464  590501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:34.830537  590501 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:34.830605  590501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:34.889856  590501 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
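For each CA bundle the log links the PEM into /etc/ssl/certs, computes its OpenSSL subject hash, and verifies the <hash>.0 symlink (51391683.0, 3ec20f2e.0, b5213941.0) is in place. The verification half, sketched in Go by shelling out to openssl as the log does (the helper name is hypothetical):

    package certs

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // hashedLinkExists computes the subject hash of a PEM cert via
    // `openssl x509 -hash -noout` and reports whether the corresponding
    // /etc/ssl/certs/<hash>.0 symlink is present.
    func hashedLinkExists(pemPath string) (bool, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return false, err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        fi, err := os.Lstat(link)
        if err != nil {
            if os.IsNotExist(err) {
                return false, nil
            }
            return false, err
        }
        return fi.Mode()&os.ModeSymlink != 0, nil
    }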
	I1210 07:32:34.904902  590501 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:32:34.925805  590501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:32:34.987693  590501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:32:35.038757  590501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:32:35.082424  590501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:32:35.134163  590501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:32:35.190797  590501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
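Each `openssl x509 -checkend 86400` above asks whether a control-plane certificate expires within the next 24 hours (86400 seconds). The equivalent check in Go's standard library, as a sketch:

    package certs

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "time"
    )

    // expiresWithin mirrors `openssl x509 -checkend`: it returns true when
    // the certificate's NotAfter falls inside the given window from now.
    func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return false, errors.New("no PEM block in input")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }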
	I1210 07:32:35.254003  590501 kubeadm.go:401] StartCluster: {Name:pause-541318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-541318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:32:35.254144  590501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:32:35.254217  590501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:32:35.308082  590501 cri.go:89] found id: "a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534"
	I1210 07:32:35.308106  590501 cri.go:89] found id: "6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406"
	I1210 07:32:35.308112  590501 cri.go:89] found id: "0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a"
	I1210 07:32:35.308116  590501 cri.go:89] found id: "fbfbc68bf7e73e711c89550613347e9d9f54da7bb77bcad1f3546380c0bd2b16"
	I1210 07:32:35.308119  590501 cri.go:89] found id: "5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44"
	I1210 07:32:35.308122  590501 cri.go:89] found id: "af021044de79f9c1effc01d7a8efa84473646366aa25f04b2662c4e063b180d6"
	I1210 07:32:35.308125  590501 cri.go:89] found id: "b181741de03b3e5f209b1a1b0315ca37b5ee3580f1c791f79562bc7631ceb6bc"
	I1210 07:32:35.308128  590501 cri.go:89] found id: "9fd1e2e6465f7112d9751e5dee9626d5348b62245fee462719c4080844f7e4d2"
	I1210 07:32:35.308131  590501 cri.go:89] found id: "c991b3b2a6538489a7be04598d8f75580e819a42c6822fbb47292427ab43b734"
	I1210 07:32:35.308139  590501 cri.go:89] found id: "f059a046a23b865faa91554e1584660e4fed6a2d6bd2f0926b4791d35e35c13c"
	I1210 07:32:35.308142  590501 cri.go:89] found id: "862d72ac46eb18991fb9881b135e88bd4acae9159fb9f5601bc14e92f27e4546"
	I1210 07:32:35.308146  590501 cri.go:89] found id: "ea9f65ca8057e73829ccfe145b45c0c841718482de948514f1bd5814252609a7"
	I1210 07:32:35.308154  590501 cri.go:89] found id: "6038a6beafc76b1cb53784be90d95d126ec81718edc58b3aac2ebf5aa9eec3c0"
	I1210 07:32:35.308157  590501 cri.go:89] found id: "6659d520d9ed3ad91628947e1ee647d40841956d2e2bf56e55edcd7d14794290"
	I1210 07:32:35.308160  590501 cri.go:89] found id: ""
	I1210 07:32:35.308218  590501 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 07:32:35.328616  590501 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:32:35Z" level=error msg="open /run/runc: no such file or directory"
	I1210 07:32:35.328690  590501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:32:35.343270  590501 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:32:35.343291  590501 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:32:35.343347  590501 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:32:35.359228  590501 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:32:35.359862  590501 kubeconfig.go:125] found "pause-541318" server: "https://192.168.85.2:8443"
	I1210 07:32:35.360686  590501 kapi.go:59] client config for pause-541318: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:32:35.361430  590501 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 07:32:35.361457  590501 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 07:32:35.361538  590501 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 07:32:35.361551  590501 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 07:32:35.361556  590501 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 07:32:35.361843  590501 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:32:35.374910  590501 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 07:32:35.374944  590501 kubeadm.go:602] duration metric: took 31.647486ms to restartPrimaryControlPlane
	I1210 07:32:35.374953  590501 kubeadm.go:403] duration metric: took 120.961076ms to StartCluster
	I1210 07:32:35.374967  590501 settings.go:142] acquiring lock: {Name:mk50f3096e55008942432e93c6cf4976a70e957e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:35.375046  590501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 07:32:35.375894  590501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:35.376106  590501 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:32:35.376492  590501 config.go:182] Loaded profile config "pause-541318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:32:35.376467  590501 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:32:35.382497  590501 out.go:179] * Enabled addons: 
	I1210 07:32:35.382497  590501 out.go:179] * Verifying Kubernetes components...
	I1210 07:32:35.385267  590501 addons.go:530] duration metric: took 8.807329ms for enable addons: enabled=[]
	I1210 07:32:35.385329  590501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:35.592964  590501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:35.609897  590501 node_ready.go:35] waiting up to 6m0s for node "pause-541318" to be "Ready" ...
	I1210 07:32:38.892374  590501 node_ready.go:49] node "pause-541318" is "Ready"
	I1210 07:32:38.892456  590501 node_ready.go:38] duration metric: took 3.282516007s for node "pause-541318" to be "Ready" ...
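node_ready.go above polls until the node's Ready condition turns True (3.28s here). A sketch of that condition check with client-go; clientset construction is omitted and the helper name is hypothetical:

    package node

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isReady fetches the node and reports whether its Ready condition is
    // True, the predicate behind the `node "pause-541318" is "Ready"` line.
    func isReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }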
	I1210 07:32:38.892497  590501 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:32:38.892595  590501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:38.917733  590501 api_server.go:72] duration metric: took 3.541580598s to wait for apiserver process to appear ...
	I1210 07:32:38.917811  590501 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:32:38.917847  590501 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 07:32:38.959018  590501 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:32:38.959113  590501 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:32:39.418950  590501 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 07:32:39.428358  590501 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:32:39.428390  590501 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:32:39.917933  590501 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 07:32:39.926500  590501 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:32:39.926536  590501 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:32:40.418151  590501 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 07:32:40.426396  590501 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 07:32:40.427532  590501 api_server.go:141] control plane version: v1.34.3
	I1210 07:32:40.427578  590501 api_server.go:131] duration metric: took 1.50974493s to wait for apiserver health ...
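The 500s above are expected during a restart: poststarthooks such as rbac/bootstrap-roles report failed until bootstrap completes, and the loop simply re-polls every 500ms until /healthz answers 200. The shape of that loop, sketched in Go (InsecureSkipVerify is a shortcut for the sketch; the real client trusts the cluster CA instead):

    package apiserver

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls /healthz until it answers 200 or the deadline
    // passes, treating 500 "healthz check failed" responses as retryable.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy within %s", url, timeout)
    }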
	I1210 07:32:40.427590  590501 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:32:40.431819  590501 system_pods.go:59] 7 kube-system pods found
	I1210 07:32:40.431859  590501 system_pods.go:61] "coredns-66bc5c9577-x88t5" [69710b36-71f7-49c0-9c7b-29fce02de488] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:40.431868  590501 system_pods.go:61] "etcd-pause-541318" [eb2f3e54-0dcf-41a7-a0a5-5739a96779cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:32:40.431874  590501 system_pods.go:61] "kindnet-7jvwx" [f25948af-12e6-4f99-b754-991454a2deae] Running
	I1210 07:32:40.431882  590501 system_pods.go:61] "kube-apiserver-pause-541318" [c94148c1-34bf-41a6-ab81-58bebda6d2bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:32:40.431890  590501 system_pods.go:61] "kube-controller-manager-pause-541318" [75c235d0-6487-4503-86d2-524e2dde11d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:32:40.431894  590501 system_pods.go:61] "kube-proxy-jft5p" [fdc7b9e9-59db-4a4c-b397-8270fdccf52c] Running
	I1210 07:32:40.431901  590501 system_pods.go:61] "kube-scheduler-pause-541318" [dc966193-b6d7-44e9-85b0-116e29153b73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:32:40.431908  590501 system_pods.go:74] duration metric: took 4.312037ms to wait for pod list to return data ...
	I1210 07:32:40.431917  590501 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:32:40.435127  590501 default_sa.go:45] found service account: "default"
	I1210 07:32:40.435157  590501 default_sa.go:55] duration metric: took 3.23274ms for default service account to be created ...
	I1210 07:32:40.435167  590501 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:32:40.438687  590501 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:40.438725  590501 system_pods.go:89] "coredns-66bc5c9577-x88t5" [69710b36-71f7-49c0-9c7b-29fce02de488] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:40.438735  590501 system_pods.go:89] "etcd-pause-541318" [eb2f3e54-0dcf-41a7-a0a5-5739a96779cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:32:40.438741  590501 system_pods.go:89] "kindnet-7jvwx" [f25948af-12e6-4f99-b754-991454a2deae] Running
	I1210 07:32:40.438748  590501 system_pods.go:89] "kube-apiserver-pause-541318" [c94148c1-34bf-41a6-ab81-58bebda6d2bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:32:40.438754  590501 system_pods.go:89] "kube-controller-manager-pause-541318" [75c235d0-6487-4503-86d2-524e2dde11d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:32:40.438759  590501 system_pods.go:89] "kube-proxy-jft5p" [fdc7b9e9-59db-4a4c-b397-8270fdccf52c] Running
	I1210 07:32:40.438765  590501 system_pods.go:89] "kube-scheduler-pause-541318" [dc966193-b6d7-44e9-85b0-116e29153b73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:32:40.438777  590501 system_pods.go:126] duration metric: took 3.604395ms to wait for k8s-apps to be running ...
	I1210 07:32:40.438786  590501 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:32:40.438849  590501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:40.453658  590501 system_svc.go:56] duration metric: took 14.86267ms WaitForService to wait for kubelet
	I1210 07:32:40.453690  590501 kubeadm.go:587] duration metric: took 5.077552756s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:32:40.453711  590501 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:32:40.457175  590501 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 07:32:40.457240  590501 node_conditions.go:123] node cpu capacity is 2
	I1210 07:32:40.457253  590501 node_conditions.go:105] duration metric: took 3.536915ms to run NodePressure ...
	I1210 07:32:40.457267  590501 start.go:242] waiting for startup goroutines ...
	I1210 07:32:40.457279  590501 start.go:247] waiting for cluster config update ...
	I1210 07:32:40.457287  590501 start.go:256] writing updated cluster config ...
	I1210 07:32:40.457667  590501 ssh_runner.go:195] Run: rm -f paused
	I1210 07:32:40.461554  590501 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:40.462190  590501 kapi.go:59] client config for pause-541318: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:32:40.466596  590501 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x88t5" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:32:42.475667  590501 pod_ready.go:104] pod "coredns-66bc5c9577-x88t5" is not "Ready", error: <nil>
	W1210 07:32:44.972389  590501 pod_ready.go:104] pod "coredns-66bc5c9577-x88t5" is not "Ready", error: <nil>
	I1210 07:32:46.472958  590501 pod_ready.go:94] pod "coredns-66bc5c9577-x88t5" is "Ready"
	I1210 07:32:46.472989  590501 pod_ready.go:86] duration metric: took 6.006365301s for pod "coredns-66bc5c9577-x88t5" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:46.475806  590501 pod_ready.go:83] waiting for pod "etcd-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:46.480699  590501 pod_ready.go:94] pod "etcd-pause-541318" is "Ready"
	I1210 07:32:46.480722  590501 pod_ready.go:86] duration metric: took 4.892383ms for pod "etcd-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:46.483175  590501 pod_ready.go:83] waiting for pod "kube-apiserver-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:32:48.488669  590501 pod_ready.go:104] pod "kube-apiserver-pause-541318" is not "Ready", error: <nil>
	W1210 07:32:50.496501  590501 pod_ready.go:104] pod "kube-apiserver-pause-541318" is not "Ready", error: <nil>
	I1210 07:32:50.989724  590501 pod_ready.go:94] pod "kube-apiserver-pause-541318" is "Ready"
	I1210 07:32:50.989755  590501 pod_ready.go:86] duration metric: took 4.506552433s for pod "kube-apiserver-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.992127  590501 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.996490  590501 pod_ready.go:94] pod "kube-controller-manager-pause-541318" is "Ready"
	I1210 07:32:50.996522  590501 pod_ready.go:86] duration metric: took 4.368989ms for pod "kube-controller-manager-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.998898  590501 pod_ready.go:83] waiting for pod "kube-proxy-jft5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:51.005531  590501 pod_ready.go:94] pod "kube-proxy-jft5p" is "Ready"
	I1210 07:32:51.005564  590501 pod_ready.go:86] duration metric: took 6.638478ms for pod "kube-proxy-jft5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:51.070832  590501 pod_ready.go:83] waiting for pod "kube-scheduler-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:51.470948  590501 pod_ready.go:94] pod "kube-scheduler-pause-541318" is "Ready"
	I1210 07:32:51.470975  590501 pod_ready.go:86] duration metric: took 400.116472ms for pod "kube-scheduler-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:51.470988  590501 pod_ready.go:40] duration metric: took 11.009397394s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:51.536095  590501 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1210 07:32:51.539406  590501 out.go:179] * Done! kubectl is now configured to use "pause-541318" cluster and "default" namespace by default
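
The healthz dumps above show the usual restart sequence: /healthz returns 500 with [-]poststarthook/rbac/bootstrap-roles failing until the bootstrap roles are reconciled, then flips to 200. A minimal sketch for reproducing the same per-check breakdown by hand, assuming the profile certificate paths shown in the client config above:

	# Query the verbose healthz endpoint that produced the [+]/[-] list above
	curl --cacert /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt \
	     --cert /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.crt \
	     --key /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.key \
	     "https://192.168.85.2:8443/healthz?verbose"

The later per-pod "Ready" waits correspond roughly to kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns, repeated for each control-plane component label.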
	
	
	==> CRI-O <==
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.291382946Z" level=info msg="Created container 5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44: kube-system/kube-proxy-jft5p/kube-proxy" id=dffea494-689c-4c0e-a723-b8a236d9eeed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.322512289Z" level=info msg="Started container" PID=3025 containerID=af021044de79f9c1effc01d7a8efa84473646366aa25f04b2662c4e063b180d6 description=kube-system/etcd-pause-541318/etcd id=0f75133e-912c-4615-8610-e6d19f5af13d name=/runtime.v1.RuntimeService/StartContainer sandboxID=a7bddda3b7ee196debf18bcd16d5586a30cb2f11ff836c458776ac58fc7abf08
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.324857741Z" level=info msg="Starting container: 5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44" id=ca96e7ce-2eb7-4d80-95a3-83fe4a9a18c7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.346981993Z" level=info msg="Started container" PID=2999 containerID=5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44 description=kube-system/kube-proxy-jft5p/kube-proxy id=ca96e7ce-2eb7-4d80-95a3-83fe4a9a18c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee5086286e5ec1d231e136433108b9fa3d906744d4e81e9a448ef748846258a2
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.411491221Z" level=info msg="Created container 0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a: kube-system/kindnet-7jvwx/kindnet-cni" id=99c1ce6e-2df5-449c-b957-ac672169a0af name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.415345374Z" level=info msg="Starting container: 0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a" id=2fdc6b29-f16e-45ab-b58c-fbf0aaac7562 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.429740134Z" level=info msg="Started container" PID=3062 containerID=0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a description=kube-system/kindnet-7jvwx/kindnet-cni id=2fdc6b29-f16e-45ab-b58c-fbf0aaac7562 name=/runtime.v1.RuntimeService/StartContainer sandboxID=174f25a43a8bd5151eedd3e71996d090cc83e1393b69e727e0e6d3b25f9552b0
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.430373782Z" level=info msg="Created container 6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406: kube-system/kube-apiserver-pause-541318/kube-apiserver" id=824b5eb9-bc55-4564-913f-af2d40d24d3e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.431105613Z" level=info msg="Starting container: 6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406" id=39fac706-7940-4189-92f8-c94da8c0475b name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.439184245Z" level=info msg="Started container" PID=3067 containerID=6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406 description=kube-system/kube-apiserver-pause-541318/kube-apiserver id=39fac706-7940-4189-92f8-c94da8c0475b name=/runtime.v1.RuntimeService/StartContainer sandboxID=bdd1faa328a3de37379b942392aae1d1ae1182c12ab946140301db183a6c2c5d
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.447820224Z" level=info msg="Created container a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534: kube-system/kube-controller-manager-pause-541318/kube-controller-manager" id=785e5ec4-7178-43bd-9a6b-8a22a445a85b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.457917708Z" level=info msg="Starting container: a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534" id=6bc62387-f733-4fdf-8926-a02078e57937 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.485668029Z" level=info msg="Started container" PID=3070 containerID=a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534 description=kube-system/kube-controller-manager-pause-541318/kube-controller-manager id=6bc62387-f733-4fdf-8926-a02078e57937 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c5d96d36007e7c1b4818ce878ce6736871d15c342261055d35fe52351c69b238
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.804498086Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.8081178Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.808277523Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.808312862Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.811400017Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.811433609Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.811458717Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.814583508Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.814624797Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.814648912Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.817767648Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.817803349Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a614d6b9c168b       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                     20 seconds ago       Running             kube-controller-manager   1                   c5d96d36007e7       kube-controller-manager-pause-541318   kube-system
	6e6dcb614a0b7       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                     20 seconds ago       Running             kube-apiserver            1                   bdd1faa328a3d       kube-apiserver-pause-541318            kube-system
	0a3e372b595ac       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                     20 seconds ago       Running             kindnet-cni               1                   174f25a43a8bd       kindnet-7jvwx                          kube-system
	fbfbc68bf7e73       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     20 seconds ago       Running             coredns                   1                   210f9e217335c       coredns-66bc5c9577-x88t5               kube-system
	5cb758b2655c0       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                     20 seconds ago       Running             kube-proxy                1                   ee5086286e5ec       kube-proxy-jft5p                       kube-system
	af021044de79f       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     20 seconds ago       Running             etcd                      1                   a7bddda3b7ee1       etcd-pause-541318                      kube-system
	b181741de03b3       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                     20 seconds ago       Running             kube-scheduler            1                   7bdb592a9968c       kube-scheduler-pause-541318            kube-system
	9fd1e2e6465f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     33 seconds ago       Exited              coredns                   0                   210f9e217335c       coredns-66bc5c9577-x88t5               kube-system
	c991b3b2a6538       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1   43 seconds ago       Exited              kindnet-cni               0                   174f25a43a8bd       kindnet-7jvwx                          kube-system
	f059a046a23b8       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                     46 seconds ago       Exited              kube-proxy                0                   ee5086286e5ec       kube-proxy-jft5p                       kube-system
	862d72ac46eb1       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                     About a minute ago   Exited              kube-controller-manager   0                   c5d96d36007e7       kube-controller-manager-pause-541318   kube-system
	ea9f65ca8057e       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                     About a minute ago   Exited              kube-apiserver            0                   bdd1faa328a3d       kube-apiserver-pause-541318            kube-system
	6038a6beafc76       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                     About a minute ago   Exited              kube-scheduler            0                   7bdb592a9968c       kube-scheduler-pause-541318            kube-system
	6659d520d9ed3       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     About a minute ago   Exited              etcd                      0                   a7bddda3b7ee1       etcd-pause-541318                      kube-system
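
The table above is CRI-level container state; a comparable view can be pulled from the node directly. A sketch, assuming the profile name from this run and that crictl is available in the node image:

	# List all containers (running and exited) inside the pause-541318 node
	minikube ssh -p pause-541318 -- sudo crictl ps -a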
	
	
	==> coredns [9fd1e2e6465f7112d9751e5dee9626d5348b62245fee462719c4080844f7e4d2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35093 - 14999 "HINFO IN 251412522390901417.2868722146761040987. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02336086s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fbfbc68bf7e73e711c89550613347e9d9f54da7bb77bcad1f3546380c0bd2b16] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33120 - 44958 "HINFO IN 584909805201989907.1621256258592804063. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012006935s
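
The connection-refused errors in this instance are coredns retrying the in-cluster apiserver service VIP (10.96.0.1:443) while kube-apiserver was restarting; they stop once the API comes back. To confirm what that VIP points at, assuming kubectl is configured for this cluster:

	# 10.96.0.1 is the ClusterIP of the default kubernetes service
	kubectl get svc kubernetes -o wide
	kubectl get endpointslices -l kubernetes.io/service-name=kubernetes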
	
	
	==> describe nodes <==
	Name:               pause-541318
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-541318
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=pause-541318
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T07_32_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 07:31:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-541318
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 07:32:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 07:32:42 +0000   Wed, 10 Dec 2025 07:31:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 07:32:42 +0000   Wed, 10 Dec 2025 07:31:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 07:32:42 +0000   Wed, 10 Dec 2025 07:31:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 07:32:42 +0000   Wed, 10 Dec 2025 07:32:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-541318
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                58d84f2e-0582-43c4-9704-fa23a03eb224
	  Boot ID:                    7e517eb4-cdae-4e97-a158-8132b5e595bf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-x88t5                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     47s
	  kube-system                 etcd-pause-541318                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         52s
	  kube-system                 kindnet-7jvwx                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      47s
	  kube-system                 kube-apiserver-pause-541318             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-pause-541318    200m (10%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-proxy-jft5p                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-scheduler-pause-541318             100m (5%)     0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 46s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node pause-541318 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node pause-541318 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node pause-541318 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Normal   Starting                 53s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 53s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  52s                kubelet          Node pause-541318 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    52s                kubelet          Node pause-541318 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     52s                kubelet          Node pause-541318 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                node-controller  Node pause-541318 event: Registered Node pause-541318 in Controller
	  Normal   NodeReady                33s                kubelet          Node pause-541318 status is now: NodeReady
	  Normal   RegisteredNode           12s                node-controller  Node pause-541318 event: Registered Node pause-541318 in Controller
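
The node dump above is standard describe output; with kubectl already pointed at this cluster (per the "Done!" line earlier), the same view can be regenerated with:

	# Re-describe the single control-plane node
	kubectl describe node pause-541318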
	
	
	==> dmesg <==
	[Dec10 06:57] overlayfs: idmapped layers are currently not supported
	[Dec10 06:58] overlayfs: idmapped layers are currently not supported
	[Dec10 06:59] overlayfs: idmapped layers are currently not supported
	[  +3.762793] overlayfs: idmapped layers are currently not supported
	[ +45.624061] overlayfs: idmapped layers are currently not supported
	[Dec10 07:00] overlayfs: idmapped layers are currently not supported
	[Dec10 07:02] overlayfs: idmapped layers are currently not supported
	[Dec10 07:06] overlayfs: idmapped layers are currently not supported
	[Dec10 07:07] overlayfs: idmapped layers are currently not supported
	[Dec10 07:08] overlayfs: idmapped layers are currently not supported
	[Dec10 07:09] overlayfs: idmapped layers are currently not supported
	[Dec10 07:10] overlayfs: idmapped layers are currently not supported
	[Dec10 07:11] overlayfs: idmapped layers are currently not supported
	[Dec10 07:12] overlayfs: idmapped layers are currently not supported
	[ +13.722126] overlayfs: idmapped layers are currently not supported
	[Dec10 07:13] overlayfs: idmapped layers are currently not supported
	[ +29.922964] overlayfs: idmapped layers are currently not supported
	[Dec10 07:14] overlayfs: idmapped layers are currently not supported
	[ +47.732709] overlayfs: idmapped layers are currently not supported
	[Dec10 07:16] overlayfs: idmapped layers are currently not supported
	[Dec10 07:17] overlayfs: idmapped layers are currently not supported
	[Dec10 07:19] overlayfs: idmapped layers are currently not supported
	[Dec10 07:21] overlayfs: idmapped layers are currently not supported
	[ +28.936234] overlayfs: idmapped layers are currently not supported
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6659d520d9ed3ad91628947e1ee647d40841956d2e2bf56e55edcd7d14794290] <==
	{"level":"warn","ts":"2025-12-10T07:31:57.453025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:31:57.489858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:31:57.500053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:31:57.544948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:31:57.549508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:31:57.576533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:31:57.674487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34442","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T07:32:26.044105Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T07:32:26.044151Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-541318","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-10T07:32:26.044248Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T07:32:26.331286Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T07:32:26.331382Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T07:32:26.331422Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-10T07:32:26.331525Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-10T07:32:26.331537Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T07:32:26.331580Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T07:32:26.331607Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T07:32:26.331618Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T07:32:26.331589Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T07:32:26.331631Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T07:32:26.331659Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-10T07:32:26.335020Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-10T07:32:26.335100Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T07:32:26.335167Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-10T07:32:26.335197Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-541318","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [af021044de79f9c1effc01d7a8efa84473646366aa25f04b2662c4e063b180d6] <==
	{"level":"warn","ts":"2025-12-10T07:32:37.564700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.578568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.597393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.621706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.634823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.658160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.673297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.701929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.722228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.751670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.773638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.793578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.809718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.821382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.845453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.857422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.905702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.939105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.964427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.987904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:38.006681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:38.053551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:38.055258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:38.076585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:38.132372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44724","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:32:54 up  4:15,  0 user,  load average: 1.77, 1.64, 1.86
	Linux pause-541318 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a] <==
	I1210 07:32:34.533860       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 07:32:34.609456       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 07:32:34.609696       1 main.go:148] setting mtu 1500 for CNI 
	I1210 07:32:34.609724       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 07:32:34.609740       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T07:32:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 07:32:34.803817       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 07:32:34.803848       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 07:32:34.803856       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 07:32:34.804556       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1210 07:32:38.853881       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 07:32:38.853993       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 07:32:38.854042       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 07:32:38.854113       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1210 07:32:39.904215       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 07:32:39.904254       1 metrics.go:72] Registering metrics
	I1210 07:32:39.904336       1 controller.go:711] "Syncing nftables rules"
	I1210 07:32:44.804034       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 07:32:44.804071       1 main.go:301] handling current node
	I1210 07:32:54.805299       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 07:32:54.805331       1 main.go:301] handling current node
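
The "Failed to watch" errors at 07:32:38 line up with the [-]poststarthook/rbac/bootstrap-roles failure in the minikube log: the restarted apiserver denied the kindnet serviceaccount while RBAC bootstrap was still in flight, and the caches synced at 07:32:39 once it completed. A quick sketch for verifying the grants, assuming kubectl is configured for this cluster:

	# Impersonate the kindnet serviceaccount and probe the permissions it needs
	kubectl auth can-i list nodes --as=system:serviceaccount:kube-system:kindnet
	kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:kube-system:kindnet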
	
	
	==> kindnet [c991b3b2a6538489a7be04598d8f75580e819a42c6822fbb47292427ab43b734] <==
	I1210 07:32:10.610252       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 07:32:10.610696       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 07:32:10.610859       1 main.go:148] setting mtu 1500 for CNI 
	I1210 07:32:10.610881       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 07:32:10.610902       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T07:32:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 07:32:10.902314       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 07:32:10.902442       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 07:32:10.902482       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 07:32:10.904463       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 07:32:11.203192       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 07:32:11.203301       1 metrics.go:72] Registering metrics
	I1210 07:32:11.203778       1 controller.go:711] "Syncing nftables rules"
	I1210 07:32:20.906179       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 07:32:20.906244       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406] <==
	I1210 07:32:38.910249       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 07:32:38.911206       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1210 07:32:38.911507       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1210 07:32:38.931648       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 07:32:38.931768       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 07:32:38.931826       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 07:32:38.932143       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 07:32:38.932218       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 07:32:38.933368       1 aggregator.go:171] initial CRD sync complete...
	I1210 07:32:38.933393       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 07:32:38.933401       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 07:32:39.010192       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 07:32:39.011132       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 07:32:39.011341       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 07:32:39.014205       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 07:32:39.014237       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 07:32:39.049355       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 07:32:39.049552       1 cache.go:39] Caches are synced for autoregister controller
	I1210 07:32:39.053459       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 07:32:39.711721       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 07:32:41.030272       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 07:32:42.426059       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 07:32:42.472839       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 07:32:42.666900       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 07:32:42.763988       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [ea9f65ca8057e73829ccfe145b45c0c841718482de948514f1bd5814252609a7] <==
	W1210 07:32:26.073632       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083149       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.073679       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.073729       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.082784       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083300       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083370       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083527       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083588       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083642       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083760       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.082829       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083933       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.082860       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.082891       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.082924       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083120       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084184       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084289       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084484       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084608       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084665       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084770       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084881       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084787       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [862d72ac46eb18991fb9881b135e88bd4acae9159fb9f5601bc14e92f27e4546] <==
	I1210 07:32:06.571599       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 07:32:06.571677       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 07:32:06.576759       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 07:32:06.577177       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 07:32:06.577316       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 07:32:06.577691       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 07:32:06.588976       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 07:32:06.597333       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 07:32:06.603568       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1210 07:32:06.603639       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 07:32:06.603659       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 07:32:06.603664       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 07:32:06.603670       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 07:32:06.615636       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 07:32:06.615659       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 07:32:06.615697       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 07:32:06.617891       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 07:32:06.617909       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 07:32:06.617981       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 07:32:06.618055       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-541318"
	I1210 07:32:06.618096       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 07:32:06.618473       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 07:32:06.620143       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 07:32:06.624301       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-541318" podCIDRs=["10.244.0.0/24"]
	I1210 07:32:21.619683       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534] <==
	I1210 07:32:42.398791       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 07:32:42.407396       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 07:32:42.407474       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 07:32:42.407863       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 07:32:42.407903       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 07:32:42.407927       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 07:32:42.408060       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 07:32:42.408444       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 07:32:42.408491       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 07:32:42.410816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 07:32:42.415525       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 07:32:42.417778       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 07:32:42.421106       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 07:32:42.450286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 07:32:42.457073       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 07:32:42.457257       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 07:32:42.457313       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 07:32:42.457480       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 07:32:42.457606       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 07:32:42.457326       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 07:32:42.457704       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-541318"
	I1210 07:32:42.457750       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1210 07:32:42.459334       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 07:32:42.459374       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 07:32:42.459382       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44] <==
	I1210 07:32:36.808080       1 server_linux.go:53] "Using iptables proxy"
	I1210 07:32:37.690285       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 07:32:38.994352       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 07:32:38.994399       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 07:32:38.994486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 07:32:39.035642       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 07:32:39.035802       1 server_linux.go:132] "Using iptables Proxier"
	I1210 07:32:39.041172       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 07:32:39.042132       1 server.go:527] "Version info" version="v1.34.3"
	I1210 07:32:39.042390       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 07:32:39.043900       1 config.go:200] "Starting service config controller"
	I1210 07:32:39.043990       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 07:32:39.044038       1 config.go:106] "Starting endpoint slice config controller"
	I1210 07:32:39.044066       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 07:32:39.044103       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 07:32:39.044130       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 07:32:39.044860       1 config.go:309] "Starting node config controller"
	I1210 07:32:39.044924       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 07:32:39.044954       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 07:32:39.144763       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 07:32:39.144768       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 07:32:39.144801       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f059a046a23b865faa91554e1584660e4fed6a2d6bd2f0926b4791d35e35c13c] <==
	I1210 07:32:08.149724       1 server_linux.go:53] "Using iptables proxy"
	I1210 07:32:08.271613       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 07:32:08.384323       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 07:32:08.399946       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 07:32:08.400073       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 07:32:08.469820       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 07:32:08.469964       1 server_linux.go:132] "Using iptables Proxier"
	I1210 07:32:08.516093       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 07:32:08.516518       1 server.go:527] "Version info" version="v1.34.3"
	I1210 07:32:08.516726       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 07:32:08.518173       1 config.go:200] "Starting service config controller"
	I1210 07:32:08.518251       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 07:32:08.518297       1 config.go:106] "Starting endpoint slice config controller"
	I1210 07:32:08.518326       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 07:32:08.518362       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 07:32:08.518388       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 07:32:08.519059       1 config.go:309] "Starting node config controller"
	I1210 07:32:08.519122       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 07:32:08.519153       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 07:32:08.618961       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 07:32:08.618994       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 07:32:08.619040       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6038a6beafc76b1cb53784be90d95d126ec81718edc58b3aac2ebf5aa9eec3c0] <==
	E1210 07:31:58.771871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 07:31:58.771927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 07:31:58.772035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 07:31:58.772091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 07:31:58.772130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:31:58.781404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 07:31:59.584762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 07:31:59.596534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 07:31:59.638713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 07:31:59.666301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 07:31:59.773926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 07:31:59.777974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 07:31:59.782681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 07:31:59.829363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 07:31:59.846761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 07:31:59.847936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 07:31:59.895169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 07:31:59.910821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:32:00.085897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1210 07:32:02.863001       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 07:32:26.040371       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1210 07:32:26.040410       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1210 07:32:26.050922       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1210 07:32:26.050969       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1210 07:32:26.050984       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b181741de03b3e5f209b1a1b0315ca37b5ee3580f1c791f79562bc7631ceb6bc] <==
	I1210 07:32:38.814580       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 07:32:38.838113       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 07:32:38.841980       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 07:32:38.842099       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 07:32:38.842154       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 07:32:38.871506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 07:32:38.872224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 07:32:38.872279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 07:32:38.872341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 07:32:38.872385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 07:32:38.872419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 07:32:38.872460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 07:32:38.872529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 07:32:38.872570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 07:32:38.871609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 07:32:38.885910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 07:32:38.886073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:32:38.886190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 07:32:38.886279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 07:32:38.886374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 07:32:38.886481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 07:32:38.886576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 07:32:38.886669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 07:32:38.900537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1210 07:32:40.243044       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.157993    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3a1323a644a525348a9dddc54fd3fcdc" pod="kube-system/etcd-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: I1210 07:32:34.167970    2031 scope.go:117] "RemoveContainer" containerID="ea9f65ca8057e73829ccfe145b45c0c841718482de948514f1bd5814252609a7"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.168569    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fb15b064e2720be2ff76387c426d515c" pod="kube-system/kube-scheduler-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.168742    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-7jvwx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f25948af-12e6-4f99-b754-991454a2deae" pod="kube-system/kindnet-7jvwx"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.168891    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jft5p\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fdc7b9e9-59db-4a4c-b397-8270fdccf52c" pod="kube-system/kube-proxy-jft5p"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.169032    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-x88t5\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="69710b36-71f7-49c0-9c7b-29fce02de488" pod="kube-system/coredns-66bc5c9577-x88t5"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.169172    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3a1323a644a525348a9dddc54fd3fcdc" pod="kube-system/etcd-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.169474    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a7c2e775428589d03a18bcbad852b170" pod="kube-system/kube-apiserver-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: I1210 07:32:34.220222    2031 scope.go:117] "RemoveContainer" containerID="862d72ac46eb18991fb9881b135e88bd4acae9159fb9f5601bc14e92f27e4546"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.220750    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jft5p\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fdc7b9e9-59db-4a4c-b397-8270fdccf52c" pod="kube-system/kube-proxy-jft5p"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.220941    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-x88t5\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="69710b36-71f7-49c0-9c7b-29fce02de488" pod="kube-system/coredns-66bc5c9577-x88t5"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.221093    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="12c8eace15e30015d05a1b70d5531924" pod="kube-system/kube-controller-manager-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.221464    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3a1323a644a525348a9dddc54fd3fcdc" pod="kube-system/etcd-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.221671    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a7c2e775428589d03a18bcbad852b170" pod="kube-system/kube-apiserver-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.221838    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fb15b064e2720be2ff76387c426d515c" pod="kube-system/kube-scheduler-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.221997    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-7jvwx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f25948af-12e6-4f99-b754-991454a2deae" pod="kube-system/kindnet-7jvwx"
	Dec 10 07:32:38 pause-541318 kubelet[2031]: E1210 07:32:38.820904    2031 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-541318\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-541318' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 10 07:32:38 pause-541318 kubelet[2031]: E1210 07:32:38.821116    2031 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-541318\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-541318' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 10 07:32:38 pause-541318 kubelet[2031]: E1210 07:32:38.822061    2031 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-541318\" is forbidden: User \"system:node:pause-541318\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-541318' and this object" podUID="fb15b064e2720be2ff76387c426d515c" pod="kube-system/kube-scheduler-pause-541318"
	Dec 10 07:32:38 pause-541318 kubelet[2031]: E1210 07:32:38.823084    2031 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-541318\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-541318' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 10 07:32:38 pause-541318 kubelet[2031]: E1210 07:32:38.856565    2031 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-7jvwx\" is forbidden: User \"system:node:pause-541318\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-541318' and this object" podUID="f25948af-12e6-4f99-b754-991454a2deae" pod="kube-system/kindnet-7jvwx"
	Dec 10 07:32:38 pause-541318 kubelet[2031]: E1210 07:32:38.891241    2031 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-jft5p\" is forbidden: User \"system:node:pause-541318\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-541318' and this object" podUID="fdc7b9e9-59db-4a4c-b397-8270fdccf52c" pod="kube-system/kube-proxy-jft5p"
	Dec 10 07:32:52 pause-541318 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 07:32:52 pause-541318 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 07:32:52 pause-541318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
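The kube-proxy logs above repeat one actionable warning: nodePortAddresses is unset, so NodePort connections are accepted on all local IPs. A minimal sketch of the remediation the warning itself suggests (flag form; how this would be wired into a kubeadm/minikube-managed kube-proxy is an assumption):

	# Sketch: restrict NodePort listeners to the node's primary IP, per the
	# hint quoted verbatim in the kube-proxy logs above. Passing the flag
	# directly assumes you control the kube-proxy command line; under kubeadm
	# the equivalent setting lives in the kube-system/kube-proxy ConfigMap.
	kube-proxy --nodeport-addresses primary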
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-541318 -n pause-541318
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-541318 -n pause-541318: exit status 2 (358.49522ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-541318 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-541318
helpers_test.go:244: (dbg) docker inspect pause-541318:

-- stdout --
	[
	    {
	        "Id": "3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06",
	        "Created": "2025-12-10T07:31:21.992197548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 587229,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:31:22.060600424Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06/hosts",
	        "LogPath": "/var/lib/docker/containers/3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06/3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06-json.log",
	        "Name": "/pause-541318",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-541318:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-541318",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f0bcbe42f2ba998e30554849ef45b3d97a8fd220888632d5f0bbbbb5fd7fa06",
	                "LowerDir": "/var/lib/docker/overlay2/1d601ae63aaa1591dd592682ffdb65f96f3fbb4ac25caa5faa48862b6b1da810-init/diff:/var/lib/docker/overlay2/50f5ee67fcdbf7d15d2907537cedb4fd70b6916e34281a824cc86638f20e8dc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d601ae63aaa1591dd592682ffdb65f96f3fbb4ac25caa5faa48862b6b1da810/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d601ae63aaa1591dd592682ffdb65f96f3fbb4ac25caa5faa48862b6b1da810/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d601ae63aaa1591dd592682ffdb65f96f3fbb4ac25caa5faa48862b6b1da810/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-541318",
	                "Source": "/var/lib/docker/volumes/pause-541318/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-541318",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-541318",
	                "name.minikube.sigs.k8s.io": "pause-541318",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "efaaf503bff8bd9641dc0531bfd54a7a18646aaae3cb4f6e2f97f31c8e1e489d",
	            "SandboxKey": "/var/run/docker/netns/efaaf503bff8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-541318": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:26:4e:b7:f7:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dd62564ee5be11b79b432ea2b9f00a1416c2797f7a90e123b38feaa45a90fb48",
	                    "EndpointID": "99ef51ad1aa6c0adc0b6c312635df84b9a682a8861755a863389026489e6735e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-541318",
	                        "3f0bcbe42f2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
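Only the container's runtime state matters for the pause check, so the full docker inspect JSON above can be narrowed with a Go template; a small sketch using the container name from this run:

	# Sketch: extract just .State.Status and .State.Paused, both visible in
	# the inspect output above, instead of dumping the whole document.
	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' pause-541318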
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-541318 -n pause-541318
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-541318 -n pause-541318: exit status 2 (357.540405ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-541318 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-541318 logs -n 25: (1.402725407s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-673350 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                         │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:18 UTC │ 10 Dec 25 07:19 UTC │
	│ start   │ -p missing-upgrade-507679 --memory=3072 --driver=docker  --container-runtime=crio                                                             │ missing-upgrade-507679    │ jenkins │ v1.35.0 │ 10 Dec 25 07:18 UTC │ 10 Dec 25 07:20 UTC │
	│ start   │ -p NoKubernetes-673350 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                         │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ start   │ -p missing-upgrade-507679 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ missing-upgrade-507679    │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ delete  │ -p NoKubernetes-673350                                                                                                                        │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ start   │ -p NoKubernetes-673350 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                         │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ ssh     │ -p NoKubernetes-673350 sudo systemctl is-active --quiet service kubelet                                                                       │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │                     │
	│ stop    │ -p NoKubernetes-673350                                                                                                                        │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ start   │ -p NoKubernetes-673350 --driver=docker  --container-runtime=crio                                                                              │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ ssh     │ -p NoKubernetes-673350 sudo systemctl is-active --quiet service kubelet                                                                       │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │                     │
	│ delete  │ -p NoKubernetes-673350                                                                                                                        │ NoKubernetes-673350       │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ start   │ -p kubernetes-upgrade-943140 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio      │ kubernetes-upgrade-943140 │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:21 UTC │
	│ delete  │ -p missing-upgrade-507679                                                                                                                     │ missing-upgrade-507679    │ jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:21 UTC │
	│ start   │ -p stopped-upgrade-051989 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                          │ stopped-upgrade-051989    │ jenkins │ v1.35.0 │ 10 Dec 25 07:21 UTC │ 10 Dec 25 07:21 UTC │
	│ stop    │ -p kubernetes-upgrade-943140                                                                                                                  │ kubernetes-upgrade-943140 │ jenkins │ v1.37.0 │ 10 Dec 25 07:21 UTC │ 10 Dec 25 07:21 UTC │
	│ start   │ -p kubernetes-upgrade-943140 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-943140 │ jenkins │ v1.37.0 │ 10 Dec 25 07:21 UTC │                     │
	│ stop    │ stopped-upgrade-051989 stop                                                                                                                   │ stopped-upgrade-051989    │ jenkins │ v1.35.0 │ 10 Dec 25 07:21 UTC │ 10 Dec 25 07:21 UTC │
	│ start   │ -p stopped-upgrade-051989 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ stopped-upgrade-051989    │ jenkins │ v1.37.0 │ 10 Dec 25 07:21 UTC │ 10 Dec 25 07:26 UTC │
	│ delete  │ -p stopped-upgrade-051989                                                                                                                     │ stopped-upgrade-051989    │ jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ start   │ -p running-upgrade-044448 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                          │ running-upgrade-044448    │ jenkins │ v1.35.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ start   │ -p running-upgrade-044448 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ running-upgrade-044448    │ jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:31 UTC │
	│ delete  │ -p running-upgrade-044448                                                                                                                     │ running-upgrade-044448    │ jenkins │ v1.37.0 │ 10 Dec 25 07:31 UTC │ 10 Dec 25 07:31 UTC │
	│ start   │ -p pause-541318 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                     │ pause-541318              │ jenkins │ v1.37.0 │ 10 Dec 25 07:31 UTC │ 10 Dec 25 07:32 UTC │
	│ start   │ -p pause-541318 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                              │ pause-541318              │ jenkins │ v1.37.0 │ 10 Dec 25 07:32 UTC │ 10 Dec 25 07:32 UTC │
	│ pause   │ -p pause-541318 --alsologtostderr -v=5                                                                                                        │ pause-541318              │ jenkins │ v1.37.0 │ 10 Dec 25 07:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
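	
	Note: an empty END TIME in this audit table appears to mean the command either exited nonzero or was still running when the audit was captured; the final pause entry is presumably the failing command this log excerpt documents. To replay it against the same profile:
	
		out/minikube-linux-arm64 pause -p pause-541318 --alsologtostderr -v=5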
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:32:24
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:32:24.141152  590501 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:32:24.141349  590501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:32:24.141361  590501 out.go:374] Setting ErrFile to fd 2...
	I1210 07:32:24.141367  590501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:32:24.142125  590501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 07:32:24.142599  590501 out.go:368] Setting JSON to false
	I1210 07:32:24.143599  590501 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15297,"bootTime":1765336648,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 07:32:24.143677  590501 start.go:143] virtualization:  
	I1210 07:32:24.146659  590501 out.go:179] * [pause-541318] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:32:24.150423  590501 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:32:24.150490  590501 notify.go:221] Checking for updates...
	I1210 07:32:24.156594  590501 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:32:24.159621  590501 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 07:32:24.162631  590501 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 07:32:24.165738  590501 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:32:24.168661  590501 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:32:24.172008  590501 config.go:182] Loaded profile config "pause-541318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:32:24.172709  590501 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:32:24.207617  590501 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:32:24.207760  590501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:32:24.259591  590501 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-10 07:32:24.250404751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:32:24.259705  590501 docker.go:319] overlay module found
	I1210 07:32:24.262980  590501 out.go:179] * Using the docker driver based on existing profile
	I1210 07:32:24.265819  590501 start.go:309] selected driver: docker
	I1210 07:32:24.265890  590501 start.go:927] validating driver "docker" against &{Name:pause-541318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-541318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:32:24.266035  590501 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:32:24.266145  590501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:32:24.322164  590501 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-10 07:32:24.312052386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:32:24.322586  590501 cni.go:84] Creating CNI manager for ""
	I1210 07:32:24.322649  590501 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:32:24.322695  590501 start.go:353] cluster config:
	{Name:pause-541318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-541318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:32:24.327743  590501 out.go:179] * Starting "pause-541318" primary control-plane node in "pause-541318" cluster
	I1210 07:32:24.330651  590501 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:32:24.333504  590501 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:32:24.336334  590501 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 07:32:24.336539  590501 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:32:24.357308  590501 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:32:24.357333  590501 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:32:24.401464  590501 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 status code: 404
	W1210 07:32:24.589808  590501 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 status code: 404
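	
	Note: both preload mirrors return 404 for the v1.34.3 cri-o arm64 tarball, so this start falls back to caching individual images (the cache.go lines further down). Whether the tarball has since been published can be checked directly; the URL is copied verbatim from the warning above:
	
		curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 | head -n1
	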
	I1210 07:32:24.589980  590501 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/config.json ...
	I1210 07:32:24.590132  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:24.591185  590501 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:32:24.591252  590501 start.go:360] acquireMachinesLock for pause-541318: {Name:mk56902b498d952effced456e7ea808de6ac5fc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:24.591329  590501 start.go:364] duration metric: took 47.065µs to acquireMachinesLock for "pause-541318"
	I1210 07:32:24.591352  590501 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:32:24.591362  590501 fix.go:54] fixHost starting: 
	I1210 07:32:24.591637  590501 cli_runner.go:164] Run: docker container inspect pause-541318 --format={{.State.Status}}
	I1210 07:32:24.620567  590501 fix.go:112] recreateIfNeeded on pause-541318: state=Running err=<nil>
	W1210 07:32:24.620601  590501 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:32:24.624450  590501 out.go:252] * Updating the running docker "pause-541318" container ...
	I1210 07:32:24.624499  590501 machine.go:94] provisionDockerMachine start ...
	I1210 07:32:24.624587  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:24.643333  590501 main.go:143] libmachine: Using SSH client type: native
	I1210 07:32:24.643673  590501 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1210 07:32:24.643688  590501 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:32:24.759514  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:24.808906  590501 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-541318
	
	I1210 07:32:24.808930  590501 ubuntu.go:182] provisioning hostname "pause-541318"
	I1210 07:32:24.809082  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:24.845589  590501 main.go:143] libmachine: Using SSH client type: native
	I1210 07:32:24.845918  590501 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1210 07:32:24.845936  590501 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-541318 && echo "pause-541318" | sudo tee /etc/hostname
	I1210 07:32:24.922183  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:25.020553  590501 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-541318
	
	I1210 07:32:25.020662  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:25.044487  590501 main.go:143] libmachine: Using SSH client type: native
	I1210 07:32:25.044792  590501 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1210 07:32:25.044811  590501 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-541318' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-541318/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-541318' | sudo tee -a /etc/hosts; 
				fi
			fi
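	
	Note: the SSH snippet above keeps the guest's /etc/hosts in sync with the new hostname: if no entry for pause-541318 exists yet, it rewrites the 127.0.1.1 line or appends one. A condensed sketch of the same logic, with the hostname hard-coded:
	
		# rewrite the 127.0.1.1 entry if present, otherwise append one
		if grep -q '^127.0.1.1\s' /etc/hosts; then
		  sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-541318/' /etc/hosts
		else
		  echo '127.0.1.1 pause-541318' | sudo tee -a /etc/hosts
		fi
	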
	I1210 07:32:25.092616  590501 cache.go:107] acquiring lock: {Name:mk0996b0b49684fecae53a62ab260ff9faa25af3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.092739  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:32:25.092754  590501 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 169.257µs
	I1210 07:32:25.092768  590501 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:32:25.092782  590501 cache.go:107] acquiring lock: {Name:mkcde84ea8e341b56c14a9da0ddd80f253a2bcdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.092823  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 07:32:25.092833  590501 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3" took 52.193µs
	I1210 07:32:25.092839  590501 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 07:32:25.092849  590501 cache.go:107] acquiring lock: {Name:mkd358dfd00c757fa5e4489a81c6d55b1de8de5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.092893  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 07:32:25.092909  590501 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3" took 55.262µs
	I1210 07:32:25.092916  590501 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 07:32:25.092937  590501 cache.go:107] acquiring lock: {Name:mk1e8ea2965a60a26ea6e464eb610a6affff1a11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.092987  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 07:32:25.092997  590501 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3" took 65.281µs
	I1210 07:32:25.093003  590501 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 07:32:25.093013  590501 cache.go:107] acquiring lock: {Name:mk02212e897dba66869d457b3bbeea186c9977d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.093043  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 07:32:25.093052  590501 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3" took 40.674µs
	I1210 07:32:25.093058  590501 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 07:32:25.093068  590501 cache.go:107] acquiring lock: {Name:mk898d21a9874899ec3c2b4393e539e74715fb83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.093098  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:32:25.093107  590501 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.198µs
	I1210 07:32:25.093113  590501 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:32:25.093127  590501 cache.go:107] acquiring lock: {Name:mk028ba2317f3b1c037987bf153e02fff8ae3e15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.093159  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 07:32:25.093167  590501 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 46.614µs
	I1210 07:32:25.093173  590501 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 07:32:25.093231  590501 cache.go:107] acquiring lock: {Name:mk528ea302435a8d73a952727ebcf4c5d4bd15a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:32:25.093275  590501 cache.go:115] /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 07:32:25.093285  590501 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 103.902µs
	I1210 07:32:25.093291  590501 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 07:32:25.093309  590501 cache.go:87] Successfully saved all images to host disk.
	I1210 07:32:25.201644  590501 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:32:25.201671  590501 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-362392/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-362392/.minikube}
	I1210 07:32:25.201689  590501 ubuntu.go:190] setting up certificates
	I1210 07:32:25.201710  590501 provision.go:84] configureAuth start
	I1210 07:32:25.201775  590501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-541318
	I1210 07:32:25.220022  590501 provision.go:143] copyHostCerts
	I1210 07:32:25.220097  590501 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem, removing ...
	I1210 07:32:25.220106  590501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem
	I1210 07:32:25.220184  590501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/cert.pem (1123 bytes)
	I1210 07:32:25.220306  590501 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem, removing ...
	I1210 07:32:25.220313  590501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem
	I1210 07:32:25.220341  590501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/key.pem (1675 bytes)
	I1210 07:32:25.220402  590501 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem, removing ...
	I1210 07:32:25.220407  590501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem
	I1210 07:32:25.220431  590501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-362392/.minikube/ca.pem (1078 bytes)
	I1210 07:32:25.220487  590501 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem org=jenkins.pause-541318 san=[127.0.0.1 192.168.85.2 localhost minikube pause-541318]
	I1210 07:32:25.634691  590501 provision.go:177] copyRemoteCerts
	I1210 07:32:25.634762  590501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:32:25.634803  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:25.657982  590501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/pause-541318/id_rsa Username:docker}
	I1210 07:32:25.766011  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:32:25.785497  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:32:25.804102  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 07:32:25.822526  590501 provision.go:87] duration metric: took 620.802472ms to configureAuth
	I1210 07:32:25.822555  590501 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:32:25.822787  590501 config.go:182] Loaded profile config "pause-541318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:32:25.822904  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:25.847094  590501 main.go:143] libmachine: Using SSH client type: native
	I1210 07:32:25.847421  590501 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1210 07:32:25.847442  590501 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:32:31.275817  590501 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:32:31.275839  590501 machine.go:97] duration metric: took 6.651331169s to provisionDockerMachine
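	
	Note: most of that 6.65s is the systemctl restart crio embedded in the SSH command above (issued at 07:32:25.8, returning at 07:32:31.2). The drop-in it wrote can be inspected on the node afterwards:
	
		sudo cat /etc/sysconfig/crio.minikube
		# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	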
	I1210 07:32:31.275852  590501 start.go:293] postStartSetup for "pause-541318" (driver="docker")
	I1210 07:32:31.275862  590501 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:32:31.275935  590501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:32:31.275983  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:31.294811  590501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/pause-541318/id_rsa Username:docker}
	I1210 07:32:31.401609  590501 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:32:31.405241  590501 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:32:31.405271  590501 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:32:31.405284  590501 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/addons for local assets ...
	I1210 07:32:31.405340  590501 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-362392/.minikube/files for local assets ...
	I1210 07:32:31.405427  590501 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem -> 3642652.pem in /etc/ssl/certs
	I1210 07:32:31.405550  590501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:32:31.413660  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 07:32:31.432213  590501 start.go:296] duration metric: took 156.345035ms for postStartSetup
	I1210 07:32:31.432298  590501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:32:31.432348  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:31.449685  590501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/pause-541318/id_rsa Username:docker}
	I1210 07:32:31.555015  590501 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:32:31.560484  590501 fix.go:56] duration metric: took 6.969114068s for fixHost
	I1210 07:32:31.560512  590501 start.go:83] releasing machines lock for "pause-541318", held for 6.969167394s
	I1210 07:32:31.560591  590501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-541318
	I1210 07:32:31.578153  590501 ssh_runner.go:195] Run: cat /version.json
	I1210 07:32:31.578210  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:31.578481  590501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:32:31.578543  590501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-541318
	I1210 07:32:31.596949  590501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/pause-541318/id_rsa Username:docker}
	I1210 07:32:31.603081  590501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/pause-541318/id_rsa Username:docker}
	I1210 07:32:31.701796  590501 ssh_runner.go:195] Run: systemctl --version
	I1210 07:32:31.796850  590501 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:32:31.839893  590501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:32:31.844651  590501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:32:31.844753  590501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:32:31.853770  590501 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
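	
	Note: the find/mv above sidelines any competing bridge or podman CNI configs by renaming them to *.mk_disabled, leaving the CNI minikube manages (kindnet here) as the only active one; in this run there was nothing to disable. The active configs can be listed with:
	
		ls /etc/cni/net.d/
	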
	I1210 07:32:31.853795  590501 start.go:496] detecting cgroup driver to use...
	I1210 07:32:31.853828  590501 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:32:31.853876  590501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:32:31.871571  590501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:32:31.891164  590501 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:32:31.891236  590501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:32:31.908631  590501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:32:31.923198  590501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:32:32.062018  590501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:32:32.214600  590501 docker.go:234] disabling docker service ...
	I1210 07:32:32.214756  590501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:32:32.233486  590501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:32:32.248645  590501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:32:32.387739  590501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:32:32.532589  590501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:32:32.546355  590501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:32:32.560992  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
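	
	Note: the /etc/crictl.yaml written just above points plain crictl invocations at the CRI-O socket; the sudo /usr/local/bin/crictl version call further down relies on it. The endpoint can also be passed explicitly:
	
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	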
	I1210 07:32:32.720549  590501 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:32:32.720630  590501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.730295  590501 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:32:32.730368  590501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.740379  590501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.750010  590501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.759072  590501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:32:32.767807  590501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.777399  590501 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.786701  590501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:32:32.795740  590501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:32:32.804170  590501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:32:32.811841  590501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:32.940077  590501 ssh_runner.go:195] Run: sudo systemctl restart crio
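	
	Note: after the sed edits above, the touched keys in /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (reconstructed from the sed expressions, not captured from the host):
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	
	A quick grep confirms they landed:
	
		sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	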
	I1210 07:32:33.167449  590501 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:32:33.167539  590501 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:32:33.171453  590501 start.go:564] Will wait 60s for crictl version
	I1210 07:32:33.171546  590501 ssh_runner.go:195] Run: which crictl
	I1210 07:32:33.175138  590501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:32:33.206032  590501 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:32:33.206116  590501 ssh_runner.go:195] Run: crio --version
	I1210 07:32:33.235067  590501 ssh_runner.go:195] Run: crio --version
	I1210 07:32:33.270032  590501 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1210 07:32:33.273352  590501 cli_runner.go:164] Run: docker network inspect pause-541318 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:32:33.291010  590501 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 07:32:33.295283  590501 kubeadm.go:884] updating cluster {Name:pause-541318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-541318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:32:33.295512  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:33.453939  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:33.600064  590501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:32:33.756108  590501 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1210 07:32:33.756186  590501 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:32:33.789091  590501 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:32:33.789118  590501 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:32:33.789127  590501 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1210 07:32:33.789267  590501 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-541318 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-541318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
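	
	Note: the unit fragment above is the kubelet systemd drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp a few lines below); the empty ExecStart= clears the packaged default before the override. The merged unit can be reviewed with:
	
		systemctl cat kubelet
	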
	I1210 07:32:33.789354  590501 ssh_runner.go:195] Run: crio config
	I1210 07:32:33.845575  590501 cni.go:84] Creating CNI manager for ""
	I1210 07:32:33.845600  590501 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:32:33.845623  590501 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:32:33.845646  590501 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-541318 NodeName:pause-541318 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:32:33.845786  590501 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-541318"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
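	Note: the rendered config above is copied to /var/tmp/minikube/kubeadm.yaml.new below. As a sanity check (not something this run performs), recent kubeadm releases can validate it in place:
	
		sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	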
	I1210 07:32:33.845862  590501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:33.854286  590501 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:32:33.854366  590501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:32:33.862578  590501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1210 07:32:33.876404  590501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:32:33.890114  590501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1210 07:32:33.904307  590501 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:32:33.908078  590501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:34.051157  590501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:34.064929  590501 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318 for IP: 192.168.85.2
	I1210 07:32:34.064961  590501 certs.go:195] generating shared ca certs ...
	I1210 07:32:34.064978  590501 certs.go:227] acquiring lock for ca certs: {Name:mk6fdc2cadbd147112797261b2432f4e1e90b685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.065136  590501 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key
	I1210 07:32:34.065248  590501 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key
	I1210 07:32:34.065261  590501 certs.go:257] generating profile certs ...
	I1210 07:32:34.065370  590501 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.key
	I1210 07:32:34.065445  590501 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/apiserver.key.bd9c7a8b
	I1210 07:32:34.065513  590501 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/proxy-client.key
	I1210 07:32:34.065634  590501 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem (1338 bytes)
	W1210 07:32:34.065671  590501 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265_empty.pem, impossibly tiny 0 bytes
	I1210 07:32:34.065688  590501 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 07:32:34.065719  590501 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/ca.pem (1078 bytes)
	I1210 07:32:34.065746  590501 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:32:34.065780  590501 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/certs/key.pem (1675 bytes)
	I1210 07:32:34.065836  590501 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem (1708 bytes)
	I1210 07:32:34.066432  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:32:34.085645  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:32:34.144262  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:32:34.200845  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:32:34.241445  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 07:32:34.283130  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:32:34.327169  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:32:34.370377  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:32:34.436080  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/certs/364265.pem --> /usr/share/ca-certificates/364265.pem (1338 bytes)
	I1210 07:32:34.479509  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/ssl/certs/3642652.pem --> /usr/share/ca-certificates/3642652.pem (1708 bytes)
	I1210 07:32:34.523014  590501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:32:34.557700  590501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
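The scp runs above stage the shared CA material and the profile's serving certs into /var/lib/minikube/certs on the node. A minimal sketch for confirming a copied cert matches the local source, assuming the MINIKUBE_HOME paths shown in this log, is to compare SHA-256 fingerprints:

    # local copy (path as logged for this Jenkins run)
    openssl x509 -noout -fingerprint -sha256 \
      -in /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt
    # copy inside the pause-541318 node
    minikube -p pause-541318 ssh -- sudo openssl x509 -noout -fingerprint -sha256 \
      -in /var/lib/minikube/certs/ca.crt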
	I1210 07:32:34.578858  590501 ssh_runner.go:195] Run: openssl version
	I1210 07:32:34.590002  590501 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364265.pem
	I1210 07:32:34.602557  590501 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364265.pem /etc/ssl/certs/364265.pem
	I1210 07:32:34.615190  590501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364265.pem
	I1210 07:32:34.619639  590501 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:19 /usr/share/ca-certificates/364265.pem
	I1210 07:32:34.619708  590501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364265.pem
	I1210 07:32:34.685446  590501 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:32:34.697623  590501 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3642652.pem
	I1210 07:32:34.709737  590501 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3642652.pem /etc/ssl/certs/3642652.pem
	I1210 07:32:34.722457  590501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3642652.pem
	I1210 07:32:34.731388  590501 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:19 /usr/share/ca-certificates/3642652.pem
	I1210 07:32:34.731457  590501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3642652.pem
	I1210 07:32:34.796412  590501 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:34.806444  590501 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:34.814910  590501 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:32:34.825464  590501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:34.830537  590501 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:10 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:34.830605  590501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:34.889856  590501 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
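Each ln/hash/test triplet above installs a CA into the node's system trust store: the PEM is symlinked into /etc/ssl/certs under its own name, `openssl x509 -hash -noout` prints the subject hash (b5213941 for minikubeCA here), and the final `test -L` checks that the hash-named symlink OpenSSL actually resolves (e.g. /etc/ssl/certs/b5213941.0) is in place. A minimal sketch of the same rehash step, using the minikubeCA.pem path from the log:

    # compute the subject hash and create the symlink OpenSSL looks up
    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"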
	I1210 07:32:34.904902  590501 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:32:34.925805  590501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:32:34.987693  590501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:32:35.038757  590501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:32:35.082424  590501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:32:35.134163  590501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:32:35.190797  590501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
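The six `-checkend 86400` runs above ask openssl whether each control-plane certificate expires within the next 86400 seconds (24 hours); a non-zero exit status would force regeneration before the restart. The same check can be run by hand, a sketch using one of the logged paths:

    # exit 0: still valid 24h from now; exit 1: expires within 24h
    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expiring soon"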
	I1210 07:32:35.254003  590501 kubeadm.go:401] StartCluster: {Name:pause-541318 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-541318 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:32:35.254144  590501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:32:35.254217  590501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:32:35.308082  590501 cri.go:89] found id: "a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534"
	I1210 07:32:35.308106  590501 cri.go:89] found id: "6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406"
	I1210 07:32:35.308112  590501 cri.go:89] found id: "0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a"
	I1210 07:32:35.308116  590501 cri.go:89] found id: "fbfbc68bf7e73e711c89550613347e9d9f54da7bb77bcad1f3546380c0bd2b16"
	I1210 07:32:35.308119  590501 cri.go:89] found id: "5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44"
	I1210 07:32:35.308122  590501 cri.go:89] found id: "af021044de79f9c1effc01d7a8efa84473646366aa25f04b2662c4e063b180d6"
	I1210 07:32:35.308125  590501 cri.go:89] found id: "b181741de03b3e5f209b1a1b0315ca37b5ee3580f1c791f79562bc7631ceb6bc"
	I1210 07:32:35.308128  590501 cri.go:89] found id: "9fd1e2e6465f7112d9751e5dee9626d5348b62245fee462719c4080844f7e4d2"
	I1210 07:32:35.308131  590501 cri.go:89] found id: "c991b3b2a6538489a7be04598d8f75580e819a42c6822fbb47292427ab43b734"
	I1210 07:32:35.308139  590501 cri.go:89] found id: "f059a046a23b865faa91554e1584660e4fed6a2d6bd2f0926b4791d35e35c13c"
	I1210 07:32:35.308142  590501 cri.go:89] found id: "862d72ac46eb18991fb9881b135e88bd4acae9159fb9f5601bc14e92f27e4546"
	I1210 07:32:35.308146  590501 cri.go:89] found id: "ea9f65ca8057e73829ccfe145b45c0c841718482de948514f1bd5814252609a7"
	I1210 07:32:35.308154  590501 cri.go:89] found id: "6038a6beafc76b1cb53784be90d95d126ec81718edc58b3aac2ebf5aa9eec3c0"
	I1210 07:32:35.308157  590501 cri.go:89] found id: "6659d520d9ed3ad91628947e1ee647d40841956d2e2bf56e55edcd7d14794290"
	I1210 07:32:35.308160  590501 cri.go:89] found id: ""
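The fourteen IDs above come from the logged crictl invocation, which prints container IDs only (`--quiet`) for everything, running or exited, carrying the kube-system namespace label. To map an ID back to its container name, crictl inspect accepts an ID prefix and a Go template; a sketch:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # resolve one ID to its container name (a unique prefix is enough)
    sudo crictl inspect -o go-template --template '{{.status.metadata.name}}' a614d6b9c168b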
	I1210 07:32:35.308218  590501 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 07:32:35.328616  590501 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:32:35Z" level=error msg="open /run/runc: no such file or directory"
	I1210 07:32:35.328690  590501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:32:35.343270  590501 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:32:35.343291  590501 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:32:35.343347  590501 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:32:35.359228  590501 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:32:35.359862  590501 kubeconfig.go:125] found "pause-541318" server: "https://192.168.85.2:8443"
	I1210 07:32:35.360686  590501 kapi.go:59] client config for pause-541318: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:32:35.361430  590501 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 07:32:35.361457  590501 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 07:32:35.361538  590501 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 07:32:35.361551  590501 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 07:32:35.361556  590501 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
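These five lines are client-go's environment-variable-driven feature gates reporting their default states (envvar.go). As an assumption worth verifying against the client-go version in use, such gates are typically overridden with KUBE_FEATURE_<Name> variables, e.g.:

    # hypothetical override; gate name taken from the log lines above
    KUBE_FEATURE_WatchListClient=true minikube status -p pause-541318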
	I1210 07:32:35.361843  590501 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:32:35.374910  590501 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 07:32:35.374944  590501 kubeadm.go:602] duration metric: took 31.647486ms to restartPrimaryControlPlane
	I1210 07:32:35.374953  590501 kubeadm.go:403] duration metric: took 120.961076ms to StartCluster
	I1210 07:32:35.374967  590501 settings.go:142] acquiring lock: {Name:mk50f3096e55008942432e93c6cf4976a70e957e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:35.375046  590501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 07:32:35.375894  590501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-362392/kubeconfig: {Name:mk64e356b1eb31ef8982d6b594101aff5f90a6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:35.376106  590501 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:32:35.376492  590501 config.go:182] Loaded profile config "pause-541318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:32:35.376467  590501 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:32:35.382497  590501 out.go:179] * Enabled addons: 
	I1210 07:32:35.382497  590501 out.go:179] * Verifying Kubernetes components...
	I1210 07:32:35.385267  590501 addons.go:530] duration metric: took 8.807329ms for enable addons: enabled=[]
	I1210 07:32:35.385329  590501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:35.592964  590501 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:35.609897  590501 node_ready.go:35] waiting up to 6m0s for node "pause-541318" to be "Ready" ...
	I1210 07:32:38.892374  590501 node_ready.go:49] node "pause-541318" is "Ready"
	I1210 07:32:38.892456  590501 node_ready.go:38] duration metric: took 3.282516007s for node "pause-541318" to be "Ready" ...
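The 3.28s readiness wait above has a direct kubectl equivalent; since minikube names the kubeconfig context after the profile, a sketch:

    kubectl --context pause-541318 wait --for=condition=Ready node/pause-541318 --timeout=6m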
	I1210 07:32:38.892497  590501 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:32:38.892595  590501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:38.917733  590501 api_server.go:72] duration metric: took 3.541580598s to wait for apiserver process to appear ...
	I1210 07:32:38.917811  590501 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:32:38.917847  590501 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 07:32:38.959018  590501 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:32:38.959113  590501 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:32:39.418950  590501 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 07:32:39.428358  590501 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:32:39.428390  590501 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:32:39.917933  590501 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 07:32:39.926500  590501 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 07:32:39.926536  590501 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 07:32:40.418151  590501 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 07:32:40.426396  590501 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 07:32:40.427532  590501 api_server.go:141] control plane version: v1.34.3
	I1210 07:32:40.427578  590501 api_server.go:131] duration metric: took 1.50974493s to wait for apiserver health ...
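The poller above simply re-hits /healthz until the aggregated check returns 200. The per-check [+]/[-] breakdown seen in the 500 responses can be requested explicitly with the verbose query parameter; a sketch using the client credentials from the kapi config logged earlier:

    curl --cacert /home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt \
         --cert /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.crt \
         --key /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.key \
         'https://192.168.85.2:8443/healthz?verbose'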
	I1210 07:32:40.427590  590501 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:32:40.431819  590501 system_pods.go:59] 7 kube-system pods found
	I1210 07:32:40.431859  590501 system_pods.go:61] "coredns-66bc5c9577-x88t5" [69710b36-71f7-49c0-9c7b-29fce02de488] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:40.431868  590501 system_pods.go:61] "etcd-pause-541318" [eb2f3e54-0dcf-41a7-a0a5-5739a96779cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:32:40.431874  590501 system_pods.go:61] "kindnet-7jvwx" [f25948af-12e6-4f99-b754-991454a2deae] Running
	I1210 07:32:40.431882  590501 system_pods.go:61] "kube-apiserver-pause-541318" [c94148c1-34bf-41a6-ab81-58bebda6d2bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:32:40.431890  590501 system_pods.go:61] "kube-controller-manager-pause-541318" [75c235d0-6487-4503-86d2-524e2dde11d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:32:40.431894  590501 system_pods.go:61] "kube-proxy-jft5p" [fdc7b9e9-59db-4a4c-b397-8270fdccf52c] Running
	I1210 07:32:40.431901  590501 system_pods.go:61] "kube-scheduler-pause-541318" [dc966193-b6d7-44e9-85b0-116e29153b73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:32:40.431908  590501 system_pods.go:74] duration metric: took 4.312037ms to wait for pod list to return data ...
	I1210 07:32:40.431917  590501 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:32:40.435127  590501 default_sa.go:45] found service account: "default"
	I1210 07:32:40.435157  590501 default_sa.go:55] duration metric: took 3.23274ms for default service account to be created ...
	I1210 07:32:40.435167  590501 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:32:40.438687  590501 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:40.438725  590501 system_pods.go:89] "coredns-66bc5c9577-x88t5" [69710b36-71f7-49c0-9c7b-29fce02de488] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:40.438735  590501 system_pods.go:89] "etcd-pause-541318" [eb2f3e54-0dcf-41a7-a0a5-5739a96779cd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:32:40.438741  590501 system_pods.go:89] "kindnet-7jvwx" [f25948af-12e6-4f99-b754-991454a2deae] Running
	I1210 07:32:40.438748  590501 system_pods.go:89] "kube-apiserver-pause-541318" [c94148c1-34bf-41a6-ab81-58bebda6d2bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:32:40.438754  590501 system_pods.go:89] "kube-controller-manager-pause-541318" [75c235d0-6487-4503-86d2-524e2dde11d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:32:40.438759  590501 system_pods.go:89] "kube-proxy-jft5p" [fdc7b9e9-59db-4a4c-b397-8270fdccf52c] Running
	I1210 07:32:40.438765  590501 system_pods.go:89] "kube-scheduler-pause-541318" [dc966193-b6d7-44e9-85b0-116e29153b73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:32:40.438777  590501 system_pods.go:126] duration metric: took 3.604395ms to wait for k8s-apps to be running ...
	I1210 07:32:40.438786  590501 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:32:40.438849  590501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:40.453658  590501 system_svc.go:56] duration metric: took 14.86267ms WaitForService to wait for kubelet
	I1210 07:32:40.453690  590501 kubeadm.go:587] duration metric: took 5.077552756s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:32:40.453711  590501 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:32:40.457175  590501 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 07:32:40.457240  590501 node_conditions.go:123] node cpu capacity is 2
	I1210 07:32:40.457253  590501 node_conditions.go:105] duration metric: took 3.536915ms to run NodePressure ...
	I1210 07:32:40.457267  590501 start.go:242] waiting for startup goroutines ...
	I1210 07:32:40.457279  590501 start.go:247] waiting for cluster config update ...
	I1210 07:32:40.457287  590501 start.go:256] writing updated cluster config ...
	I1210 07:32:40.457667  590501 ssh_runner.go:195] Run: rm -f paused
	I1210 07:32:40.461554  590501 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:40.462190  590501 kapi.go:59] client config for pause-541318: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/profiles/pause-541318/client.key", CAFile:"/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:32:40.466596  590501 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x88t5" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:32:42.475667  590501 pod_ready.go:104] pod "coredns-66bc5c9577-x88t5" is not "Ready", error: <nil>
	W1210 07:32:44.972389  590501 pod_ready.go:104] pod "coredns-66bc5c9577-x88t5" is not "Ready", error: <nil>
	I1210 07:32:46.472958  590501 pod_ready.go:94] pod "coredns-66bc5c9577-x88t5" is "Ready"
	I1210 07:32:46.472989  590501 pod_ready.go:86] duration metric: took 6.006365301s for pod "coredns-66bc5c9577-x88t5" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:46.475806  590501 pod_ready.go:83] waiting for pod "etcd-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:46.480699  590501 pod_ready.go:94] pod "etcd-pause-541318" is "Ready"
	I1210 07:32:46.480722  590501 pod_ready.go:86] duration metric: took 4.892383ms for pod "etcd-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:46.483175  590501 pod_ready.go:83] waiting for pod "kube-apiserver-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:32:48.488669  590501 pod_ready.go:104] pod "kube-apiserver-pause-541318" is not "Ready", error: <nil>
	W1210 07:32:50.496501  590501 pod_ready.go:104] pod "kube-apiserver-pause-541318" is not "Ready", error: <nil>
	I1210 07:32:50.989724  590501 pod_ready.go:94] pod "kube-apiserver-pause-541318" is "Ready"
	I1210 07:32:50.989755  590501 pod_ready.go:86] duration metric: took 4.506552433s for pod "kube-apiserver-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.992127  590501 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.996490  590501 pod_ready.go:94] pod "kube-controller-manager-pause-541318" is "Ready"
	I1210 07:32:50.996522  590501 pod_ready.go:86] duration metric: took 4.368989ms for pod "kube-controller-manager-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.998898  590501 pod_ready.go:83] waiting for pod "kube-proxy-jft5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:51.005531  590501 pod_ready.go:94] pod "kube-proxy-jft5p" is "Ready"
	I1210 07:32:51.005564  590501 pod_ready.go:86] duration metric: took 6.638478ms for pod "kube-proxy-jft5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:51.070832  590501 pod_ready.go:83] waiting for pod "kube-scheduler-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:51.470948  590501 pod_ready.go:94] pod "kube-scheduler-pause-541318" is "Ready"
	I1210 07:32:51.470975  590501 pod_ready.go:86] duration metric: took 400.116472ms for pod "kube-scheduler-pause-541318" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:51.470988  590501 pod_ready.go:40] duration metric: took 11.009397394s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:51.536095  590501 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1210 07:32:51.539406  590501 out.go:179] * Done! kubectl is now configured to use "pause-541318" cluster and "default" namespace by default
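At this point every kube-system pod has passed the label-based readiness wait; the end state can be confirmed against the context this run just configured:

    kubectl --context pause-541318 get pods -n kube-system -o wide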
	
	
	==> CRI-O <==
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.291382946Z" level=info msg="Created container 5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44: kube-system/kube-proxy-jft5p/kube-proxy" id=dffea494-689c-4c0e-a723-b8a236d9eeed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.322512289Z" level=info msg="Started container" PID=3025 containerID=af021044de79f9c1effc01d7a8efa84473646366aa25f04b2662c4e063b180d6 description=kube-system/etcd-pause-541318/etcd id=0f75133e-912c-4615-8610-e6d19f5af13d name=/runtime.v1.RuntimeService/StartContainer sandboxID=a7bddda3b7ee196debf18bcd16d5586a30cb2f11ff836c458776ac58fc7abf08
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.324857741Z" level=info msg="Starting container: 5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44" id=ca96e7ce-2eb7-4d80-95a3-83fe4a9a18c7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.346981993Z" level=info msg="Started container" PID=2999 containerID=5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44 description=kube-system/kube-proxy-jft5p/kube-proxy id=ca96e7ce-2eb7-4d80-95a3-83fe4a9a18c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee5086286e5ec1d231e136433108b9fa3d906744d4e81e9a448ef748846258a2
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.411491221Z" level=info msg="Created container 0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a: kube-system/kindnet-7jvwx/kindnet-cni" id=99c1ce6e-2df5-449c-b957-ac672169a0af name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.415345374Z" level=info msg="Starting container: 0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a" id=2fdc6b29-f16e-45ab-b58c-fbf0aaac7562 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.429740134Z" level=info msg="Started container" PID=3062 containerID=0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a description=kube-system/kindnet-7jvwx/kindnet-cni id=2fdc6b29-f16e-45ab-b58c-fbf0aaac7562 name=/runtime.v1.RuntimeService/StartContainer sandboxID=174f25a43a8bd5151eedd3e71996d090cc83e1393b69e727e0e6d3b25f9552b0
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.430373782Z" level=info msg="Created container 6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406: kube-system/kube-apiserver-pause-541318/kube-apiserver" id=824b5eb9-bc55-4564-913f-af2d40d24d3e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.431105613Z" level=info msg="Starting container: 6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406" id=39fac706-7940-4189-92f8-c94da8c0475b name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.439184245Z" level=info msg="Started container" PID=3067 containerID=6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406 description=kube-system/kube-apiserver-pause-541318/kube-apiserver id=39fac706-7940-4189-92f8-c94da8c0475b name=/runtime.v1.RuntimeService/StartContainer sandboxID=bdd1faa328a3de37379b942392aae1d1ae1182c12ab946140301db183a6c2c5d
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.447820224Z" level=info msg="Created container a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534: kube-system/kube-controller-manager-pause-541318/kube-controller-manager" id=785e5ec4-7178-43bd-9a6b-8a22a445a85b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.457917708Z" level=info msg="Starting container: a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534" id=6bc62387-f733-4fdf-8926-a02078e57937 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 07:32:34 pause-541318 crio[2852]: time="2025-12-10T07:32:34.485668029Z" level=info msg="Started container" PID=3070 containerID=a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534 description=kube-system/kube-controller-manager-pause-541318/kube-controller-manager id=6bc62387-f733-4fdf-8926-a02078e57937 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c5d96d36007e7c1b4818ce878ce6736871d15c342261055d35fe52351c69b238
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.804498086Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.8081178Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.808277523Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.808312862Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.811400017Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.811433609Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.811458717Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.814583508Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.814624797Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.814648912Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.817767648Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 07:32:44 pause-541318 crio[2852]: time="2025-12-10T07:32:44.817803349Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a614d6b9c168b       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                     22 seconds ago       Running             kube-controller-manager   1                   c5d96d36007e7       kube-controller-manager-pause-541318   kube-system
	6e6dcb614a0b7       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                     22 seconds ago       Running             kube-apiserver            1                   bdd1faa328a3d       kube-apiserver-pause-541318            kube-system
	0a3e372b595ac       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                     22 seconds ago       Running             kindnet-cni               1                   174f25a43a8bd       kindnet-7jvwx                          kube-system
	fbfbc68bf7e73       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     22 seconds ago       Running             coredns                   1                   210f9e217335c       coredns-66bc5c9577-x88t5               kube-system
	5cb758b2655c0       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                     22 seconds ago       Running             kube-proxy                1                   ee5086286e5ec       kube-proxy-jft5p                       kube-system
	af021044de79f       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     22 seconds ago       Running             etcd                      1                   a7bddda3b7ee1       etcd-pause-541318                      kube-system
	b181741de03b3       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                     22 seconds ago       Running             kube-scheduler            1                   7bdb592a9968c       kube-scheduler-pause-541318            kube-system
	9fd1e2e6465f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     35 seconds ago       Exited              coredns                   0                   210f9e217335c       coredns-66bc5c9577-x88t5               kube-system
	c991b3b2a6538       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1   46 seconds ago       Exited              kindnet-cni               0                   174f25a43a8bd       kindnet-7jvwx                          kube-system
	f059a046a23b8       4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162                                     49 seconds ago       Exited              kube-proxy                0                   ee5086286e5ec       kube-proxy-jft5p                       kube-system
	862d72ac46eb1       7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22                                     About a minute ago   Exited              kube-controller-manager   0                   c5d96d36007e7       kube-controller-manager-pause-541318   kube-system
	ea9f65ca8057e       cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896                                     About a minute ago   Exited              kube-apiserver            0                   bdd1faa328a3d       kube-apiserver-pause-541318            kube-system
	6038a6beafc76       2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6                                     About a minute ago   Exited              kube-scheduler            0                   7bdb592a9968c       kube-scheduler-pause-541318            kube-system
	6659d520d9ed3       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     About a minute ago   Exited              etcd                      0                   a7bddda3b7ee1       etcd-pause-541318                      kube-system
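Reading the table: each control-plane container appears twice, an Exited row at ATTEMPT 0 and a Running row at ATTEMPT 1 sharing the same POD ID, which is the expected footprint of the pause/unpause restart. A filtered listing reproduces one such pair, e.g.:

    sudo crictl ps -a --name kube-apiserver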
	
	
	==> coredns [9fd1e2e6465f7112d9751e5dee9626d5348b62245fee462719c4080844f7e4d2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35093 - 14999 "HINFO IN 251412522390901417.2868722146761040987. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02336086s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fbfbc68bf7e73e711c89550613347e9d9f54da7bb77bcad1f3546380c0bd2b16] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33120 - 44958 "HINFO IN 584909805201989907.1621256258592804063. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012006935s
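The connection-refused burst in this log targets 10.96.0.1:443, the kubernetes Service ClusterIP (the first address of the 10.96.0.0/12 ServiceCIDR in the StartCluster config above), and it stops once the restarted apiserver is serving again. The target can be confirmed with:

    kubectl --context pause-541318 -n default get svc kubernetes -o jsonpath='{.spec.clusterIP}'   # expect 10.96.0.1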
	
	
	==> describe nodes <==
	Name:               pause-541318
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-541318
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602
	                    minikube.k8s.io/name=pause-541318
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T07_32_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 07:31:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-541318
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 07:32:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 07:32:42 +0000   Wed, 10 Dec 2025 07:31:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 07:32:42 +0000   Wed, 10 Dec 2025 07:31:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 07:32:42 +0000   Wed, 10 Dec 2025 07:31:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 07:32:42 +0000   Wed, 10 Dec 2025 07:32:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-541318
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                58d84f2e-0582-43c4-9704-fa23a03eb224
	  Boot ID:                    7e517eb4-cdae-4e97-a158-8132b5e595bf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-x88t5                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     50s
	  kube-system                 etcd-pause-541318                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         55s
	  kube-system                 kindnet-7jvwx                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      50s
	  kube-system                 kube-apiserver-pause-541318             250m (12%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-controller-manager-pause-541318    200m (10%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-proxy-jft5p                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-scheduler-pause-541318             100m (5%)     0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 48s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node pause-541318 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node pause-541318 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node pause-541318 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 56s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  55s                kubelet          Node pause-541318 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s                kubelet          Node pause-541318 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s                kubelet          Node pause-541318 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                node-controller  Node pause-541318 event: Registered Node pause-541318 in Controller
	  Normal   NodeReady                36s                kubelet          Node pause-541318 status is now: NodeReady
	  Normal   RegisteredNode           15s                node-controller  Node pause-541318 event: Registered Node pause-541318 in Controller
	
	
	==> dmesg <==
	[Dec10 06:57] overlayfs: idmapped layers are currently not supported
	[Dec10 06:58] overlayfs: idmapped layers are currently not supported
	[Dec10 06:59] overlayfs: idmapped layers are currently not supported
	[  +3.762793] overlayfs: idmapped layers are currently not supported
	[ +45.624061] overlayfs: idmapped layers are currently not supported
	[Dec10 07:00] overlayfs: idmapped layers are currently not supported
	[Dec10 07:02] overlayfs: idmapped layers are currently not supported
	[Dec10 07:06] overlayfs: idmapped layers are currently not supported
	[Dec10 07:07] overlayfs: idmapped layers are currently not supported
	[Dec10 07:08] overlayfs: idmapped layers are currently not supported
	[Dec10 07:09] overlayfs: idmapped layers are currently not supported
	[Dec10 07:10] overlayfs: idmapped layers are currently not supported
	[Dec10 07:11] overlayfs: idmapped layers are currently not supported
	[Dec10 07:12] overlayfs: idmapped layers are currently not supported
	[ +13.722126] overlayfs: idmapped layers are currently not supported
	[Dec10 07:13] overlayfs: idmapped layers are currently not supported
	[ +29.922964] overlayfs: idmapped layers are currently not supported
	[Dec10 07:14] overlayfs: idmapped layers are currently not supported
	[ +47.732709] overlayfs: idmapped layers are currently not supported
	[Dec10 07:16] overlayfs: idmapped layers are currently not supported
	[Dec10 07:17] overlayfs: idmapped layers are currently not supported
	[Dec10 07:19] overlayfs: idmapped layers are currently not supported
	[Dec10 07:21] overlayfs: idmapped layers are currently not supported
	[ +28.936234] overlayfs: idmapped layers are currently not supported
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6659d520d9ed3ad91628947e1ee647d40841956d2e2bf56e55edcd7d14794290] <==
	{"level":"warn","ts":"2025-12-10T07:31:57.453025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:31:57.489858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:31:57.500053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:31:57.544948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:31:57.549508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:31:57.576533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:31:57.674487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34442","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T07:32:26.044105Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T07:32:26.044151Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-541318","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-10T07:32:26.044248Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T07:32:26.331286Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T07:32:26.331382Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T07:32:26.331422Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-10T07:32:26.331525Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-10T07:32:26.331537Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T07:32:26.331580Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T07:32:26.331607Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T07:32:26.331618Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T07:32:26.331589Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T07:32:26.331631Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T07:32:26.331659Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-10T07:32:26.335020Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-10T07:32:26.335100Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T07:32:26.335167Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-10T07:32:26.335197Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-541318","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [af021044de79f9c1effc01d7a8efa84473646366aa25f04b2662c4e063b180d6] <==
	{"level":"warn","ts":"2025-12-10T07:32:37.564700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.578568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.597393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.621706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.634823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.658160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.673297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.701929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.722228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.751670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.773638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.793578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.809718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.821382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.845453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.857422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.905702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.939105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.964427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:37.987904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:38.006681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:38.053551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:38.055258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:38.076585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:32:38.132372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44724","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:32:57 up  4:15,  0 user,  load average: 2.03, 1.69, 1.88
	Linux pause-541318 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0a3e372b595acf4fbc3c235329f7145b9b000692f6791757d1dccca0b168de1a] <==
	I1210 07:32:34.533860       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 07:32:34.609456       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 07:32:34.609696       1 main.go:148] setting mtu 1500 for CNI 
	I1210 07:32:34.609724       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 07:32:34.609740       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T07:32:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 07:32:34.803817       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 07:32:34.803848       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 07:32:34.803856       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 07:32:34.804556       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1210 07:32:38.853881       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 07:32:38.853993       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 07:32:38.854042       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 07:32:38.854113       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1210 07:32:39.904215       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 07:32:39.904254       1 metrics.go:72] Registering metrics
	I1210 07:32:39.904336       1 controller.go:711] "Syncing nftables rules"
	I1210 07:32:44.804034       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 07:32:44.804071       1 main.go:301] handling current node
	I1210 07:32:54.805299       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 07:32:54.805331       1 main.go:301] handling current node
	
	
	==> kindnet [c991b3b2a6538489a7be04598d8f75580e819a42c6822fbb47292427ab43b734] <==
	I1210 07:32:10.610252       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 07:32:10.610696       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 07:32:10.610859       1 main.go:148] setting mtu 1500 for CNI 
	I1210 07:32:10.610881       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 07:32:10.610902       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T07:32:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 07:32:10.902314       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 07:32:10.902442       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 07:32:10.902482       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 07:32:10.904463       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 07:32:11.203192       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 07:32:11.203301       1 metrics.go:72] Registering metrics
	I1210 07:32:11.203778       1 controller.go:711] "Syncing nftables rules"
	I1210 07:32:20.906179       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 07:32:20.906244       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6e6dcb614a0b7edbe84d9e9a5dd8ed8cc91ec9977d6e4ed5ed96e569a3bd5406] <==
	I1210 07:32:38.910249       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 07:32:38.911206       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1210 07:32:38.911507       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1210 07:32:38.931648       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 07:32:38.931768       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 07:32:38.931826       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 07:32:38.932143       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 07:32:38.932218       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 07:32:38.933368       1 aggregator.go:171] initial CRD sync complete...
	I1210 07:32:38.933393       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 07:32:38.933401       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 07:32:39.010192       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 07:32:39.011132       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 07:32:39.011341       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 07:32:39.014205       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 07:32:39.014237       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 07:32:39.049355       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 07:32:39.049552       1 cache.go:39] Caches are synced for autoregister controller
	I1210 07:32:39.053459       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 07:32:39.711721       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 07:32:41.030272       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 07:32:42.426059       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 07:32:42.472839       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 07:32:42.666900       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 07:32:42.763988       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [ea9f65ca8057e73829ccfe145b45c0c841718482de948514f1bd5814252609a7] <==
	W1210 07:32:26.073632       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083149       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.073679       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.073729       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.082784       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083300       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083370       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083527       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083588       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083642       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083760       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.082829       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083933       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.082860       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.082891       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.082924       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.083120       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084184       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084289       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084484       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084608       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084665       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084770       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084881       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 07:32:26.084787       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [862d72ac46eb18991fb9881b135e88bd4acae9159fb9f5601bc14e92f27e4546] <==
	I1210 07:32:06.571599       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 07:32:06.571677       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 07:32:06.576759       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 07:32:06.577177       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 07:32:06.577316       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 07:32:06.577691       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 07:32:06.588976       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 07:32:06.597333       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 07:32:06.603568       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1210 07:32:06.603639       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 07:32:06.603659       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 07:32:06.603664       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 07:32:06.603670       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 07:32:06.615636       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 07:32:06.615659       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 07:32:06.615697       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 07:32:06.617891       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 07:32:06.617909       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 07:32:06.617981       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 07:32:06.618055       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-541318"
	I1210 07:32:06.618096       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 07:32:06.618473       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 07:32:06.620143       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 07:32:06.624301       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-541318" podCIDRs=["10.244.0.0/24"]
	I1210 07:32:21.619683       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [a614d6b9c168bf531f880dca6ae80a6056d9702163b3dec1ea02beee515b1534] <==
	I1210 07:32:42.398791       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 07:32:42.407396       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 07:32:42.407474       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 07:32:42.407863       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 07:32:42.407903       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 07:32:42.407927       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 07:32:42.408060       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 07:32:42.408444       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 07:32:42.408491       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 07:32:42.410816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 07:32:42.415525       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 07:32:42.417778       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 07:32:42.421106       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 07:32:42.450286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 07:32:42.457073       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 07:32:42.457257       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 07:32:42.457313       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 07:32:42.457480       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 07:32:42.457606       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 07:32:42.457326       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 07:32:42.457704       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-541318"
	I1210 07:32:42.457750       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1210 07:32:42.459334       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 07:32:42.459374       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 07:32:42.459382       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [5cb758b2655c025c2ef35bbd2b50a39801df7474cfcd2f5aa41e71e9896a4b44] <==
	I1210 07:32:36.808080       1 server_linux.go:53] "Using iptables proxy"
	I1210 07:32:37.690285       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 07:32:38.994352       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 07:32:38.994399       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 07:32:38.994486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 07:32:39.035642       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 07:32:39.035802       1 server_linux.go:132] "Using iptables Proxier"
	I1210 07:32:39.041172       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 07:32:39.042132       1 server.go:527] "Version info" version="v1.34.3"
	I1210 07:32:39.042390       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 07:32:39.043900       1 config.go:200] "Starting service config controller"
	I1210 07:32:39.043990       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 07:32:39.044038       1 config.go:106] "Starting endpoint slice config controller"
	I1210 07:32:39.044066       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 07:32:39.044103       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 07:32:39.044130       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 07:32:39.044860       1 config.go:309] "Starting node config controller"
	I1210 07:32:39.044924       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 07:32:39.044954       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 07:32:39.144763       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 07:32:39.144768       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 07:32:39.144801       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f059a046a23b865faa91554e1584660e4fed6a2d6bd2f0926b4791d35e35c13c] <==
	I1210 07:32:08.149724       1 server_linux.go:53] "Using iptables proxy"
	I1210 07:32:08.271613       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 07:32:08.384323       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 07:32:08.399946       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 07:32:08.400073       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 07:32:08.469820       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 07:32:08.469964       1 server_linux.go:132] "Using iptables Proxier"
	I1210 07:32:08.516093       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 07:32:08.516518       1 server.go:527] "Version info" version="v1.34.3"
	I1210 07:32:08.516726       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 07:32:08.518173       1 config.go:200] "Starting service config controller"
	I1210 07:32:08.518251       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 07:32:08.518297       1 config.go:106] "Starting endpoint slice config controller"
	I1210 07:32:08.518326       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 07:32:08.518362       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 07:32:08.518388       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 07:32:08.519059       1 config.go:309] "Starting node config controller"
	I1210 07:32:08.519122       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 07:32:08.519153       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 07:32:08.618961       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 07:32:08.618994       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 07:32:08.619040       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6038a6beafc76b1cb53784be90d95d126ec81718edc58b3aac2ebf5aa9eec3c0] <==
	E1210 07:31:58.771871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 07:31:58.771927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 07:31:58.772035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 07:31:58.772091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 07:31:58.772130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:31:58.781404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 07:31:59.584762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 07:31:59.596534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 07:31:59.638713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 07:31:59.666301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 07:31:59.773926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 07:31:59.777974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 07:31:59.782681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 07:31:59.829363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 07:31:59.846761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 07:31:59.847936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 07:31:59.895169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 07:31:59.910821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:32:00.085897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1210 07:32:02.863001       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 07:32:26.040371       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1210 07:32:26.040410       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1210 07:32:26.050922       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1210 07:32:26.050969       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1210 07:32:26.050984       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b181741de03b3e5f209b1a1b0315ca37b5ee3580f1c791f79562bc7631ceb6bc] <==
	I1210 07:32:38.814580       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 07:32:38.838113       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 07:32:38.841980       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 07:32:38.842099       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 07:32:38.842154       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1210 07:32:38.871506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 07:32:38.872224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 07:32:38.872279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 07:32:38.872341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 07:32:38.872385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 07:32:38.872419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 07:32:38.872460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 07:32:38.872529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 07:32:38.872570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 07:32:38.871609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 07:32:38.885910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 07:32:38.886073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:32:38.886190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 07:32:38.886279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 07:32:38.886374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 07:32:38.886481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 07:32:38.886576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 07:32:38.886669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 07:32:38.900537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1210 07:32:40.243044       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.157993    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3a1323a644a525348a9dddc54fd3fcdc" pod="kube-system/etcd-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: I1210 07:32:34.167970    2031 scope.go:117] "RemoveContainer" containerID="ea9f65ca8057e73829ccfe145b45c0c841718482de948514f1bd5814252609a7"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.168569    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fb15b064e2720be2ff76387c426d515c" pod="kube-system/kube-scheduler-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.168742    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-7jvwx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f25948af-12e6-4f99-b754-991454a2deae" pod="kube-system/kindnet-7jvwx"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.168891    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jft5p\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fdc7b9e9-59db-4a4c-b397-8270fdccf52c" pod="kube-system/kube-proxy-jft5p"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.169032    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-x88t5\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="69710b36-71f7-49c0-9c7b-29fce02de488" pod="kube-system/coredns-66bc5c9577-x88t5"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.169172    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3a1323a644a525348a9dddc54fd3fcdc" pod="kube-system/etcd-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.169474    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a7c2e775428589d03a18bcbad852b170" pod="kube-system/kube-apiserver-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: I1210 07:32:34.220222    2031 scope.go:117] "RemoveContainer" containerID="862d72ac46eb18991fb9881b135e88bd4acae9159fb9f5601bc14e92f27e4546"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.220750    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jft5p\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fdc7b9e9-59db-4a4c-b397-8270fdccf52c" pod="kube-system/kube-proxy-jft5p"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.220941    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-x88t5\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="69710b36-71f7-49c0-9c7b-29fce02de488" pod="kube-system/coredns-66bc5c9577-x88t5"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.221093    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="12c8eace15e30015d05a1b70d5531924" pod="kube-system/kube-controller-manager-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.221464    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3a1323a644a525348a9dddc54fd3fcdc" pod="kube-system/etcd-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.221671    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a7c2e775428589d03a18bcbad852b170" pod="kube-system/kube-apiserver-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.221838    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-541318\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fb15b064e2720be2ff76387c426d515c" pod="kube-system/kube-scheduler-pause-541318"
	Dec 10 07:32:34 pause-541318 kubelet[2031]: E1210 07:32:34.221997    2031 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-7jvwx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f25948af-12e6-4f99-b754-991454a2deae" pod="kube-system/kindnet-7jvwx"
	Dec 10 07:32:38 pause-541318 kubelet[2031]: E1210 07:32:38.820904    2031 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-541318\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-541318' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 10 07:32:38 pause-541318 kubelet[2031]: E1210 07:32:38.821116    2031 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-541318\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-541318' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 10 07:32:38 pause-541318 kubelet[2031]: E1210 07:32:38.822061    2031 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-541318\" is forbidden: User \"system:node:pause-541318\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-541318' and this object" podUID="fb15b064e2720be2ff76387c426d515c" pod="kube-system/kube-scheduler-pause-541318"
	Dec 10 07:32:38 pause-541318 kubelet[2031]: E1210 07:32:38.823084    2031 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-541318\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-541318' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 10 07:32:38 pause-541318 kubelet[2031]: E1210 07:32:38.856565    2031 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-7jvwx\" is forbidden: User \"system:node:pause-541318\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-541318' and this object" podUID="f25948af-12e6-4f99-b754-991454a2deae" pod="kube-system/kindnet-7jvwx"
	Dec 10 07:32:38 pause-541318 kubelet[2031]: E1210 07:32:38.891241    2031 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-jft5p\" is forbidden: User \"system:node:pause-541318\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-541318' and this object" podUID="fdc7b9e9-59db-4a4c-b397-8270fdccf52c" pod="kube-system/kube-proxy-jft5p"
	Dec 10 07:32:52 pause-541318 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 07:32:52 pause-541318 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 07:32:52 pause-541318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
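
The scheduler's "Failed to watch ... is forbidden" lines and the kubelet's "no relationship found between node ... and this object" lines in the dump above are authorization denials logged while the apiserver comes back up; they stop once the authorizer caches sync, which the final scheduler line ("Caches are synced") records. Below is a minimal client-go sketch that asks the apiserver the same question the first scheduler error raises, whether system:kube-scheduler may list csidrivers.storage.k8s.io. It is illustrative only: it assumes a reachable cluster via the default kubeconfig and is not part of the test harness.

	// sar_sketch.go: ask the apiserver whether system:kube-scheduler may
	// list csidrivers.storage.k8s.io, the check behind the first scheduler
	// error above. Illustrative sketch, not minikube harness code.
	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Group:    "storage.k8s.io",
					Resource: "csidrivers",
				},
			},
		}
		resp, err := cs.AuthorizationV1().SubjectAccessReviews().
			Create(context.Background(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		// During an apiserver restart this can briefly report false, which
		// is the window the forbidden errors above fall into.
		fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
	}
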
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-541318 -n pause-541318
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-541318 -n pause-541318: exit status 2 (385.485114ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-541318 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.53s)
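
A note on the "(may be ok)" above: minikube status prints component state on stdout and signals degraded or stopped components through its exit code, so a paused cluster can print "Running" for the apiserver field while exiting 2, which is what the helper tolerates. A minimal sketch of reading stdout and the exit code separately the way the post-mortem helper has to; the binary path and profile name are taken from the log, and this is not the harness's actual code.

	// status_sketch.go: read both stdout and the exit code from
	// `minikube status`, where a non-zero exit can mean a paused or
	// stopped component rather than a hard failure. Sketch only; the
	// binary path and profile name are copied from the log above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64",
			"status", "--format={{.APIServer}}", "-p", "pause-541318")
		out, err := cmd.Output() // stdout only, mirroring the helper's capture

		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // e.g. 2 while the cluster is paused
		} else if err != nil {
			panic(err) // the binary could not be started at all
		}
		fmt.Printf("stdout=%q exit=%d\n", out, code)
	}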

TestNetworkPlugins/group/enable-default-cni/NetCatPod (7200.084s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-957064 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-6sj2h" [50355117-c1ff-4bac-9477-1aa78a14a213] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-6sj2h" [50355117-c1ff-4bac-9477-1aa78a14a213] Running
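
The wait behind the helper lines above polls the default namespace for pods labelled app=netcat until one reports Running, with a 15m ceiling. A minimal client-go sketch of the same style of label-selector wait; the kubeconfig handling and intervals here are assumptions, and this is not minikube's helpers_test.go implementation.

	// netcat_wait_sketch.go: poll for a Running pod matching app=netcat,
	// in the spirit of the helper above. Sketch only; assumes a kubeconfig
	// at the default location.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		err = wait.PollUntilContextTimeout(context.Background(),
			2*time.Second, 15*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("default").List(ctx,
					metav1.ListOptions{LabelSelector: "app=netcat"})
				if err != nil {
					return false, nil // keep polling on transient errors
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("running:", p.Name)
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
	}
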
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (35m27s)
		TestNetworkPlugins/group/bridge (45s)
		TestNetworkPlugins/group/bridge/Start (45s)
		TestNetworkPlugins/group/enable-default-cni (1m31s)
		TestNetworkPlugins/group/enable-default-cni/NetCatPod (10s)
		TestStartStop (37m53s)
		TestStartStop/group (45s)
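
The panic is Go's test-binary watchdog rather than a failed assertion: the suite ran with a two-hour -timeout, and when it expired testing.(*M).startAlarm (the running goroutine just below) panicked and dumped every live goroutine. The same failure shape can be reproduced with a trivial test, shown purely for illustration:

	// timeout_sketch_test.go: reproduces the failure shape above with
	//   go test -run TestSleepsTooLong -timeout 1s
	// which prints "panic: test timed out after 1s" followed by a
	// goroutine dump like the one below. Illustrative only.
	package main

	import (
		"testing"
		"time"
	)

	func TestSleepsTooLong(t *testing.T) {
		time.Sleep(2 * time.Second) // outlives -timeout, so startAlarm fires
	}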

goroutine 6491 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 30 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40004668c0, 0x4000737bb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x400069a060, {0x534c680, 0x2c, 0x2c}, {0x4000737d08?, 0x125774?, 0x5375080?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x400072bc20)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x400072bc20)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 6212 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x4000082070}, 0x4001319740, 0x4001319788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x4000082070}, 0x2a?, 0x4001319740, 0x4001319788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x4000082070?}, 0x0?, 0x4001319750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42f0?, 0x4000224080?, 0x400142e540?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6192
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c
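
Many of the parked goroutines in this dump share the shape above: client-go's certificate-rotation worker polling under wait.PollImmediateUntil until its stop channel closes. These are per-client background loops left behind by earlier subtests, not the hang itself. A minimal sketch of that (deprecated) polling API; the ready condition here is a made-up stand-in, not client-go's real worker:

	// poll_sketch.go: the (deprecated) PollImmediateUntil pattern the
	// cert-rotation stacks above are parked in. ready() is a hypothetical
	// condition for illustration.
	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		stopCh := make(chan struct{})
		time.AfterFunc(5*time.Second, func() { close(stopCh) })

		calls := 0
		ready := func() (bool, error) { // wait.ConditionFunc
			calls++
			return calls >= 3, nil // pretend we become ready on the 3rd try
		}

		// Runs ready() immediately, then once per second, until it returns
		// true or stopCh closes: the loop the goroutines above sit in.
		if err := wait.PollImmediateUntil(time.Second, ready, stopCh); err != nil {
			fmt.Println("stopped before ready:", err)
			return
		}
		fmt.Println("ready after", calls, "calls")
	}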

goroutine 1341 [IO wait, 110 minutes]:
internal/poll.runtime_pollWait(0xffff57e69000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4000afbd00?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4000afbd00)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4000afbd00)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40003be100)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40003be100)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40014ff200, {0x36d4020, 0x40003be100})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40014ff200)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1339
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 230 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x4000224080?}, 0x400031ed80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 223
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5227 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x4000082070}, 0x40000a5740, 0x40000a5788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x4000082070}, 0xb6?, 0x40000a5740, 0x40000a5788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x4000082070?}, 0x0?, 0x40000a5750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42f0?, 0x4000224080?, 0x4000103180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5223
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 877 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x4000082070}, 0x40000a5740, 0x4001352f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x4000082070}, 0x9b?, 0x40000a5740, 0x40000a5788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x4000082070?}, 0x0?, 0x40000a5750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42f0?, 0x4000224080?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 862
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 700 [IO wait, 114 minutes]:
internal/poll.runtime_pollWait(0xffff57a92400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400040bb80?, 0x297c4?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400040bb80)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400040bb80)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40015ff380)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40015ff380)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40004e6800, {0x36d4020, 0x40015ff380})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40004e6800)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 698
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 5850 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x40017c8050, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40017c8040)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400178ea80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000263ce0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x4000082070?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x4000082070}, 0x4000877f38, {0x369e540, 0x40000e5ef0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42f0?, {0x369e540?, 0x40000e5ef0?}, 0x70?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001503170, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5847
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3799 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x4000082070}, 0x400134ff40, 0x400134ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x4000082070}, 0x60?, 0x400134ff40, 0x400134ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x4000082070?}, 0x400066abe0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400171e900?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3785
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4063 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4062
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 6474 [select]:
os/exec.(*Cmd).watchCtx(0x400031e780, 0x4001912460)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 6471
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4062 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x4000082070}, 0x40015ed740, 0x40015ed788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x4000082070}, 0xb8?, 0x40015ed740, 0x40015ed788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x4000082070?}, 0x400171f200?, 0x400044cc80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400171e780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4083
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 231 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016153e0, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 223
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1930 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x4000671080, 0x4001c16700)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1929
	/usr/local/go/src/os/exec/exec.go:775 +0x678
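
Goroutines like 1930 above have been blocked in a channel send inside os/exec.(*Cmd).watchCtx for 80+ minutes. One plausible reading is a context-bound command that was started but never waited on: watchCtx blocks delivering its result until Wait drains it. A minimal sketch of the pattern with the Wait call that lets that goroutine exit; the sleep command is just a stand-in:

	// exec_sketch.go: a context-bound command must always be Wait()ed,
	// otherwise os/exec's watchCtx goroutine can stay parked in "chan send"
	// after the context fires. Illustrative sketch only.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()

		cmd := exec.CommandContext(ctx, "sleep", "60")
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		// Without this Wait, watchCtx would have no reader for its result
		// once the timeout kills sleep, the state the stacks above show.
		err := cmd.Wait()
		fmt.Println("wait returned:", err) // "signal: killed" after 2s
	}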

goroutine 201 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 200
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 200 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x4000082070}, 0x4000873f40, 0x4000873f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x4000082070}, 0x50?, 0x4000873f40, 0x4000873f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x4000082070?}, 0x4000671080?, 0x40017fcdc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400031e780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 231
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 199 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000b9cbd0, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000b9cbc0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016153e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40000838f0?, 0x36e65c8?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x4000082070?}, 0x296bf25?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x4000082070}, 0x400010ef38, {0x369e540, 0x40018fa030}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x12?, {0x369e540?, 0x40018fa030?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40019024e0, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 231
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5222 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x4000224080?}, 0x4000103180?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5213
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6473 [IO wait]:
internal/poll.runtime_pollWait(0xffff57e68200, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001abc5a0?, 0x4001543c2a?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001abc5a0, {0x4001543c2a, 0x43d6, 0x43d6})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x4001af60f0, {0x4001543c2a?, 0x4001b34d48?, 0xcc76c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001b1e630, {0x369c908, 0x40019b6028})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cb00, 0x4001b1e630}, {0x369c908, 0x40019b6028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4001af60f0?, {0x369cb00, 0x4001b1e630})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x4001af60f0, {0x369cb00, 0x4001b1e630})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cb00, 0x4001b1e630}, {0x369c988, 0x4001af60f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4000082070?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6471
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 3545 [chan receive, 8 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x400149afc0, 0x4001b938a8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3242
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5223 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400178f8c0, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5213
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5526 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x4000082070}, 0x40015a7f40, 0x40015a7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x4000082070}, 0x0?, 0x40015a7f40, 0x40015a7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x4000082070?}, 0x36e6638?, 0x4001c163f0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4001c16310?, 0x0?, 0x4001306700?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5522
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5852 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5851
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5525 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001b88e50, 0xd)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001b88e40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40017f6180)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002ff420?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x4000082070?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x4000082070}, 0x40013faf38, {0x369e540, 0x4001578ba0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42f0?, {0x369e540?, 0x4001578ba0?}, 0x70?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40015a2120, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5522
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 876 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40003c7910, 0x2c)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40003c7900)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400178e780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40013722a0?, 0x54bdd8?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x4000082070?}, 0x13?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x4000082070}, 0x4000109f38, {0x369e540, 0x400130a990}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e540?, 0x400130a990?}, 0xc0?, 0x40014a8f00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40019fee70, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 862
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3798 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40005fe990, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40005fe980)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400087af60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001372b60?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x4000082070?}, 0x40015e66a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x4000082070}, 0x40015a5f38, {0x369e540, 0x4001590060}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40015e67a8?, {0x369e540?, 0x4001590060?}, 0xe0?, 0x36e6638?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001c30030, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3785
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5846 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x4000224080?}, 0x4001306380?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5845
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6463 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6462
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5228 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5227
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1950 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x400031f380, 0x4001696fc0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1949
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3242 [chan receive, 36 minutes]:
testing.(*T).Run(0x400149a380, {0x296d71f?, 0xcf37b73cecf?}, 0x4001b938a8)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x400149a380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x400149a380, 0x339bb10)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 6458 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001a0e900, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6479
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5521 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x4000224080?}, 0x4001425500?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5488
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3303 [chan receive, 39 minutes]:
testing.(*T).Run(0x400149a8c0, {0x296d71f?, 0x40013f8f58?}, 0x339bd40)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x400149a8c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x400149a8c0, 0x339bb58)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1041 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x4001b50000, 0x4001c16230)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1024
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3618 [chan receive]:
testing.(*T).Run(0x400149ba40, {0x297644f?, 0x368ae10?}, 0x4001b1ee10)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x400149ba40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:148 +0x724
testing.tRunner(0x400149ba40, 0x4000afb400)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3545
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1106 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x400131b200, 0x4001affb20)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1105
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3800 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3799
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 861 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x4000224080?}, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 833
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5527 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5526
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 6497 [IO wait]:
internal/poll.runtime_pollWait(0xffff57e68800, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4000afb700?, 0x4000869000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4000afb700, {0x4000869000, 0x1800, 0x1800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
net.(*netFD).Read(0x4000afb700, {0x4000869000?, 0x4000869059?, 0x5?})
	/usr/local/go/src/net/fd_posix.go:68 +0x28
net.(*conn).Read(0x4001af6260, {0x4000869000?, 0x40015a68a8?, 0x8b27c?})
	/usr/local/go/src/net/net.go:196 +0x34
crypto/tls.(*atLeastReader).Read(0x4001c13608, {0x4000869000?, 0x40015a6908?, 0x2cbb64?})
	/usr/local/go/src/crypto/tls/conn.go:816 +0x38
bytes.(*Buffer).ReadFrom(0x4001527b28, {0x369ec60, 0x4001c13608})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
crypto/tls.(*Conn).readFromUntil(0x4001527888, {0xffff57a93800, 0x4001c12c00}, 0x40015a69b0?)
	/usr/local/go/src/crypto/tls/conn.go:838 +0xcc
crypto/tls.(*Conn).readRecordOrCCS(0x4001527888, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:627 +0x340
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:589
crypto/tls.(*Conn).Read(0x4001527888, {0x400139f000, 0x1000, 0x4000000000?})
	/usr/local/go/src/crypto/tls/conn.go:1392 +0x14c
bufio.(*Reader).Read(0x4001b7ac00, {0x400176ac84, 0x9, 0x542a60?})
	/usr/local/go/src/bufio/bufio.go:245 +0x188
io.ReadAtLeast({0x369cba0, 0x4001b7ac00}, {0x400176ac84, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x98
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0x400176ac84, 0x9, 0x4000000015?}, {0x369cba0?, 0x4001b7ac00?})
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/frame.go:242 +0x58
golang.org/x/net/http2.(*Framer).ReadFrameHeader(0x400176ac40)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/frame.go:505 +0x60
golang.org/x/net/http2.(*Framer).ReadFrame(0x400176ac40)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/frame.go:564 +0x20
golang.org/x/net/http2.(*clientConnReadLoop).run(0x40015a6f98)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/transport.go:2208 +0xb8
golang.org/x/net/http2.(*ClientConn).readLoop(0x4001306540)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/transport.go:2077 +0x4c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 6464
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/transport.go:866 +0xa90

goroutine 2039 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x4000671080, 0x4001696bd0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1496
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3784 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x4000224080?}, 0x400149b6c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3780
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6471 [syscall]:
syscall.Syscall6(0x5f, 0x3, 0x14, 0x40017abc38, 0x4, 0x4000125320, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x40017abd98?, 0x1929a0?, 0xffffd5bb91a3?, 0x0?, 0x4001930240?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x4001b88200)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x40017abd68?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x400031e780)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x400031e780)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x4001425500, 0x400031e780)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:104 +0x154
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0x4001425500)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x44
testing.tRunner(0x4001425500, 0x4001b1e4e0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3617
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1071 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x400031f080, 0x4001c17ce0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 824
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 862 [chan receive, 112 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400178e780, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 833
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 6472 [IO wait]:
internal/poll.runtime_pollWait(0xffff57a91600, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001abc4e0?, 0x400156f2b6?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001abc4e0, {0x400156f2b6, 0x54a, 0x54a})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x4001af60d8, {0x400156f2b6?, 0x4001b36548?, 0xcc76c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001b1e600, {0x369c908, 0x40019b6020})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cb00, 0x4001b1e600}, {0x369c908, 0x40019b6020}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4001af60d8?, {0x369cb00, 0x4001b1e600})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x4001af60d8, {0x369cb00, 0x4001b1e600})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cb00, 0x4001b1e600}, {0x369c988, 0x4001af60d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4001425500?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6471
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 878 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 877
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3675 [chan receive, 36 minutes]:
testing.(*testState).waitParallel(0x40006e47d0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40015ae700)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40015ae700)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40015ae700)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40015ae700, 0x40013aaa80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3545
	/usr/local/go/src/testing/testing.go:1997 +0x364
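
Goroutine 3675 above (and 3500 further down) is parked in testing's waitParallel: the subtest called t.Parallel() and was still queued for a free parallel slot when the watchdog fired. A minimal test showing how parallel subtests queue for slots; run with a deliberately small -parallel value, purely for illustration:

	// parallel_sketch_test.go: how t.Parallel() queues subtests for slots.
	//   go test -run TestSlots -parallel 1 -v
	// serializes the subtests; the one not running waits in waitParallel,
	// the same state as goroutines 3675/3500 above. Illustrative only.
	package main

	import (
		"testing"
		"time"
	)

	func TestSlots(t *testing.T) {
		for _, name := range []string{"a", "b"} {
			t.Run(name, func(t *testing.T) {
				t.Parallel() // parks in waitParallel until a slot is free
				time.Sleep(100 * time.Millisecond)
			})
		}
	}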

goroutine 5226 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40005fe510, 0xf)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40005fe500)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400178f8c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001380b60?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x4000082070?}, 0x40013f0538?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x4000082070}, 0x400134ef38, {0x369e540, 0x4001b5e330}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e540?, 0x4001b5e330?}, 0x0?, 0x36e6638?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40015024a0, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5223
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174
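
Goroutines 5226, 1574, 4061, 4324, 6211 and 6461 in this dump all share one shape: a client-go cert-rotation worker blocked in a typed workqueue Get (the sync.Cond.Wait frame) and re-armed every second by wait.Until. A minimal sketch of that consumer pattern using the same packages (illustrative wiring, not minikube's or client-go's actual setup):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.NewTyped[string]()
	stopCh := make(chan struct{})

	processNext := func() bool {
		key, shutdown := queue.Get() // blocks here, as in the stacks above
		if shutdown {
			return false
		}
		defer queue.Done(key)
		fmt.Println("processing", key)
		return true
	}

	// wait.Until re-invokes the worker loop every second until stopCh closes.
	go wait.Until(func() {
		for processNext() {
		}
	}, time.Second, stopCh)

	queue.Add("transport-1")
	time.Sleep(100 * time.Millisecond)
	queue.ShutDown() // makes Get return shutdown=true
	close(stopCh)
}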

goroutine 3500 [chan receive]:
testing.(*testState).waitParallel(0x40006e47d0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1906 +0x4c4
testing.tRunner(0x400183c8c0, 0x339bd40)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3303
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5522 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40017f6180, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5488
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4083 [chan receive, 28 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016143c0, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4078
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3617 [chan receive]:
testing.(*T).Run(0x400149b880, {0x296d724?, 0x368ae10?}, 0x4001b1e4e0)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x400149b880)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x4f4
testing.tRunner(0x400149b880, 0x4000afb380)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3545
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1251 [select, 110 minutes]:
net/http.(*persistConn).writeLoop(0x40015f8480)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1200
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 1250 [select, 110 minutes]:
net/http.(*persistConn).readLoop(0x40015f8480)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1200
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 6479 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e6638, 0x4000382150}, {0x36d4680, 0x4000415b60}, 0x1, 0x0, 0x4001435ba0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e6638?, 0x40003fe9a0?}, 0x3b9aca00, 0x4001435dc8?, 0x1, 0x4001435ba0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e6638, 0x40003fe9a0}, 0x4001425880, {0x400169a640, 0x19}, {0x2971411, 0x7}, {0x297866a, 0xa}, 0xd18c2e2800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:380 +0x22c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0x4001425880)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:163 +0x2a0
testing.tRunner(0x4001425880, 0x4001b1ee10)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3618
	/usr/local/go/src/testing/testing.go:1997 +0x364
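
Goroutine 6479 is a test's PodWait poll: the hex arguments show a 1-second interval (0x3b9aca00 ns) and a 15-minute budget (0xd18c2e2800 ns = 900s) handed to wait.PollUntilContextTimeout. A minimal sketch of that polling contract (the condition below is a stand-in, not minikube's pod-phase check):

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ready := time.Now().Add(3 * time.Second) // stand-in for "pod is Running"
	err := wait.PollUntilContextTimeout(
		context.Background(),
		time.Second,    // 0x3b9aca00 ns in the dump
		15*time.Minute, // 0xd18c2e2800 ns in the dump
		true,           // immediate: evaluate once before the first tick
		func(ctx context.Context) (bool, error) {
			return time.Now().After(ready), nil
		})
	fmt.Println("wait finished:", err) // nil on success, context error on timeout
}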

goroutine 1586 [chan receive, 82 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400087a120, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1520
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1585 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x4000224080?}, 0x400031ef00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1520
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6213 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6212
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4082 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x4000224080?}, 0x400149a700?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4078
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6192 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400178eae0, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6187
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1575 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x4000082070}, 0x40015eaf40, 0x4001351f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x4000082070}, 0x98?, 0x40015eaf40, 0x40015eaf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x4000082070?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400149b6c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1586
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3785 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400087af60, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3780
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4325 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x4000082070}, 0x4001b32f40, 0x4001b32f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x4000082070}, 0x88?, 0x4001b32f40, 0x4001b32f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x4000082070?}, 0x36e6638?, 0x4001595880?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000671080?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4339
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1574 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000b9cf50, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000b9cf40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400087a120)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400017b260?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x4000082070?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x4000082070}, 0x40013d1f38, {0x369e540, 0x40019f8cc0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42f0?, {0x369e540?, 0x40019f8cc0?}, 0xe0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001699990, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1586
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1576 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1575
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4326 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4325
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 6462 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x4000082070}, 0x4001c2ff40, 0x4001c2ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x4000082070}, 0x6b?, 0x4001c2ff40, 0x4001c2ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x4000082070?}, 0x0?, 0x4001c2ff50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42f0?, 0x4000224080?, 0x4000671800?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6458
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6461 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40005ff390, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40005ff380)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001a0e900)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001af1880?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x4000082070?}, 0x40000a1ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x4000082070}, 0x40013fcf38, {0x369e540, 0x40019f1500}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e540?, 0x40019f1500?}, 0xe0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400144a520, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6458
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4061 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000b9d790, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000b9d780)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016143c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001380850?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x4000082070?}, 0x4001318ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x4000082070}, 0x40013d2f38, {0x369e540, 0x4001308b10}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001318fa8?, {0x369e540?, 0x4001308b10?}, 0xe0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001c10620, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4083
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5851 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x4000082070}, 0x40015eaf40, 0x40015eaf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x4000082070}, 0x0?, 0x40015eaf40, 0x40015eaf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x4000082070?}, 0x36e6638?, 0x4001c170a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4001c16fc0?, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5847
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5847 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400178ea80, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5845
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4339 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001614ea0, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4337
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4338 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x4000224080?}, 0x400142e540?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4337
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4324 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x40005fe7d0, 0x2)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40005fe7c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001614ea0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40013728c0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x4000082070?}, 0x40015ea6a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x4000082070}, 0x40015abf38, {0x369e540, 0x40019f8060}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40015ea7a8?, {0x369e540?, 0x40019f8060?}, 0xe0?, 0x400171e780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001c300b0, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4339
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 6211 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40016a5f10, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016a5f00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400178eae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001af07e0?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x4000082070?}, 0x400044b0b0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x4000082070}, 0x4001350f38, {0x369e540, 0x4001579770}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x400044b030?, {0x369e540?, 0x4001579770?}, 0x1?, 0x36e6638?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001698350, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6192
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 6457 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x4000224080?}, 0x4000671800?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6479
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6191 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x4000224080?}, 0x400142e540?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6187
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204


Test pass (242/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.02
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.3/json-events 3.53
14 TestDownloadOnly/v1.34.3/cached-images 0.52
15 TestDownloadOnly/v1.34.3/binaries 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.27
18 TestDownloadOnly/v1.34.3/DeleteAll 0.21
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-rc.1/json-events 3.22
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0.45
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 1.07
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 145.42
40 TestAddons/serial/GCPAuth/Namespaces 0.22
41 TestAddons/serial/GCPAuth/FakeCredentials 10.85
57 TestAddons/StoppedEnableDisable 12.48
58 TestCertOptions 51.02
59 TestCertExpiration 260.88
61 TestForceSystemdFlag 45.18
62 TestForceSystemdEnv 57.97
67 TestErrorSpam/setup 40.28
68 TestErrorSpam/start 1.05
69 TestErrorSpam/status 1.17
70 TestErrorSpam/pause 6.61
71 TestErrorSpam/unpause 5.26
72 TestErrorSpam/stop 1.52
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 60.7
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 29.23
79 TestFunctional/serial/KubeContext 0.07
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
84 TestFunctional/serial/CacheCmd/cache/add_local 1.28
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 32.64
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.45
95 TestFunctional/serial/LogsFileCmd 1.52
96 TestFunctional/serial/InvalidService 4.16
98 TestFunctional/parallel/ConfigCmd 0.47
99 TestFunctional/parallel/DashboardCmd 15.04
100 TestFunctional/parallel/DryRun 0.66
101 TestFunctional/parallel/InternationalLanguage 0.29
102 TestFunctional/parallel/StatusCmd 1.13
106 TestFunctional/parallel/ServiceCmdConnect 8.59
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 20.57
110 TestFunctional/parallel/SSHCmd 0.77
111 TestFunctional/parallel/CpCmd 2.04
113 TestFunctional/parallel/FileSync 0.36
114 TestFunctional/parallel/CertSync 1.83
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.77
122 TestFunctional/parallel/License 0.41
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.34
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
136 TestFunctional/parallel/ProfileCmd/profile_list 0.42
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
138 TestFunctional/parallel/MountCmd/any-port 8.42
139 TestFunctional/parallel/ServiceCmd/List 0.54
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
142 TestFunctional/parallel/ServiceCmd/Format 0.43
143 TestFunctional/parallel/ServiceCmd/URL 0.38
144 TestFunctional/parallel/MountCmd/specific-port 2.18
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.89
146 TestFunctional/parallel/Version/short 0.11
147 TestFunctional/parallel/Version/components 1.13
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.99
153 TestFunctional/parallel/ImageCommands/Setup 0.66
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.96
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.1
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
161 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
162 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
163 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.56
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.21
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.05
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.05
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.32
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.8
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.14
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 0.95
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 0.97
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.47
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.2
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.74
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 2.19
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.74
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.56
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.27
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.42
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.4
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.4
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.82
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 2.11
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.5
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.24
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.25
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.24
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 3.78
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.25
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.34
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.84
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.1
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.37
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.52
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.78
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.46
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.18
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 175.81
265 TestMultiControlPlane/serial/DeployApp 6.55
266 TestMultiControlPlane/serial/PingHostFromPods 1.9
267 TestMultiControlPlane/serial/AddWorkerNode 62.24
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.1
270 TestMultiControlPlane/serial/CopyFile 20.95
271 TestMultiControlPlane/serial/StopSecondaryNode 12.88
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
273 TestMultiControlPlane/serial/RestartSecondaryNode 29.34
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.41
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 123.4
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.11
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
278 TestMultiControlPlane/serial/StopCluster 36.16
279 TestMultiControlPlane/serial/RestartCluster 95.7
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
281 TestMultiControlPlane/serial/AddSecondaryNode 67.34
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.13
287 TestJSONOutput/start/Command 59.37
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.83
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 43.3
313 TestKicCustomNetwork/use_default_bridge_network 41.41
314 TestKicExistingNetwork 43.14
315 TestKicCustomSubnet 41.82
316 TestKicStaticIP 39.13
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 92.25
321 TestMountStart/serial/StartWithMountFirst 9.03
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.74
324 TestMountStart/serial/VerifyMountSecond 0.31
325 TestMountStart/serial/DeleteFirst 1.76
326 TestMountStart/serial/VerifyMountPostDelete 0.29
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 8.41
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 89.73
333 TestMultiNode/serial/DeployApp2Nodes 5.07
334 TestMultiNode/serial/PingHostFrom2Pods 0.92
335 TestMultiNode/serial/AddNode 31.77
336 TestMultiNode/serial/MultiNodeLabels 0.1
337 TestMultiNode/serial/ProfileList 0.76
338 TestMultiNode/serial/CopyFile 11.09
339 TestMultiNode/serial/StopNode 2.48
340 TestMultiNode/serial/StartAfterStop 9.17
341 TestMultiNode/serial/RestartKeepsNodes 76.73
342 TestMultiNode/serial/DeleteNode 5.57
343 TestMultiNode/serial/StopMultiNode 24.03
344 TestMultiNode/serial/RestartMultiNode 57.79
345 TestMultiNode/serial/ValidateNameConflict 44.43
350 TestPreload 122.13
352 TestScheduledStopUnix 119.48
355 TestInsufficientStorage 9.28
356 TestRunningBinaryUpgrade 301.26
359 TestMissingContainerUpgrade 121.56
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 61.18
363 TestNoKubernetes/serial/StartWithStopK8s 19.37
364 TestNoKubernetes/serial/Start 8.23
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
367 TestNoKubernetes/serial/ProfileList 0.7
368 TestNoKubernetes/serial/Stop 1.44
369 TestNoKubernetes/serial/StartNoArgs 8.77
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
371 TestStoppedBinaryUpgrade/Setup 1.75
372 TestStoppedBinaryUpgrade/Upgrade 313.05
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.99
382 TestPause/serial/Start 63.42
383 TestPause/serial/SecondStartNoReconfiguration 27.51
TestDownloadOnly/v1.28.0/json-events (7.02s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-789794 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-789794 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.020141181s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.02s)
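
The json-events subtests drive minikube start with -o=json and read one JSON object per stdout line. A minimal sketch of such a consumer (the "type" field name follows minikube's CloudEvents-style output and is an assumption here, not taken from this log; the profile name is a placeholder):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the shape of the test's command line above.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json",
		"--download-only", "-p", "demo", "--driver=docker", "--container-runtime=crio")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev struct {
			Type string          `json:"type"`
			Data json.RawMessage `json:"data"`
		}
		// Skip any line that is not a JSON event.
		if json.Unmarshal(sc.Bytes(), &ev) == nil {
			fmt.Println("event:", ev.Type)
		}
	}
	_ = cmd.Wait()
}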

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1210 06:09:20.259335  364265 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1210 06:09:20.259420  364265 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
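
The check behind this subtest is a plain stat of the cached tarball; the "Found local preload" line above names the exact path. A minimal sketch (a hypothetical helper, not minikube's preload.go):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether the preload tarball for the given
// Kubernetes version is already present in the local cache.
func preloadExists(minikubeHome, k8sVersion string) bool {
	p := filepath.Join(minikubeHome, "cache", "preloaded-tarball",
		fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-arm64.tar.lz4", k8sVersion))
	_, err := os.Stat(p)
	return err == nil
}

func main() {
	fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.28.0"))
}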

TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-789794
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-789794: exit status 85 (96.952513ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-789794 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-789794 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:09:13
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:09:13.287929  364270 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:09:13.288129  364270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:09:13.288143  364270 out.go:374] Setting ErrFile to fd 2...
	I1210 06:09:13.288149  364270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:09:13.288420  364270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	W1210 06:09:13.288559  364270 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22094-362392/.minikube/config/config.json: open /home/jenkins/minikube-integration/22094-362392/.minikube/config/config.json: no such file or directory
	I1210 06:09:13.288975  364270 out.go:368] Setting JSON to true
	I1210 06:09:13.289843  364270 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10306,"bootTime":1765336648,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:09:13.289916  364270 start.go:143] virtualization:  
	I1210 06:09:13.295443  364270 out.go:99] [download-only-789794] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1210 06:09:13.295639  364270 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22094-362392/.minikube/cache/preloaded-tarball: no such file or directory
	I1210 06:09:13.295719  364270 notify.go:221] Checking for updates...
	I1210 06:09:13.298795  364270 out.go:171] MINIKUBE_LOCATION=22094
	I1210 06:09:13.302043  364270 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:09:13.305156  364270 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:09:13.308391  364270 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:09:13.311491  364270 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 06:09:13.317700  364270 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 06:09:13.318011  364270 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:09:13.347463  364270 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:09:13.347590  364270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:09:13.403632  364270 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-10 06:09:13.394653077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:09:13.403769  364270 docker.go:319] overlay module found
	I1210 06:09:13.406794  364270 out.go:99] Using the docker driver based on user configuration
	I1210 06:09:13.406842  364270 start.go:309] selected driver: docker
	I1210 06:09:13.406853  364270 start.go:927] validating driver "docker" against <nil>
	I1210 06:09:13.406961  364270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:09:13.459526  364270 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-10 06:09:13.450254932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:09:13.459687  364270 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:09:13.459983  364270 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 06:09:13.460149  364270 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 06:09:13.463498  364270 out.go:171] Using Docker driver with root privileges
	I1210 06:09:13.467009  364270 cni.go:84] Creating CNI manager for ""
	I1210 06:09:13.467079  364270 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:09:13.467092  364270 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:09:13.467173  364270 start.go:353] cluster config:
	{Name:download-only-789794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-789794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:09:13.470206  364270 out.go:99] Starting "download-only-789794" primary control-plane node in "download-only-789794" cluster
	I1210 06:09:13.470227  364270 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:09:13.473288  364270 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:09:13.473327  364270 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 06:09:13.473439  364270 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:09:13.489150  364270 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 06:09:13.489346  364270 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 06:09:13.489454  364270 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 06:09:13.526233  364270 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1210 06:09:13.526275  364270 cache.go:65] Caching tarball of preloaded images
	I1210 06:09:13.526451  364270 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 06:09:13.529684  364270 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1210 06:09:13.529717  364270 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1210 06:09:13.619070  364270 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1210 06:09:13.619236  364270 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22094-362392/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-789794 host does not exist
	  To start a cluster, run: "minikube start -p download-only-789794"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
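
The download line in the log above appends "?checksum=md5:..." to the preload URL, i.e. the digest fetched from the GCS API gates the downloaded file. A minimal sketch of that verify-while-downloading contract (URL and destination are placeholders; only the digest is taken from the log):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dst, hashing as it writes, and fails
// if the final digest does not match wantMD5.
func downloadWithMD5(url, dst, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadWithMD5("https://example.invalid/preload.tar.lz4",
		"/tmp/preload.tar.lz4", "e092595ade89dbfc477bd4cd6b9c633b")
	fmt.Println(err)
}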

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-789794
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.3/json-events (3.53s)

=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-091542 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-091542 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.530064201s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (3.53s)

TestDownloadOnly/v1.34.3/cached-images (0.52s)

=== RUN   TestDownloadOnly/v1.34.3/cached-images
I1210 06:09:24.391692  364265 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
I1210 06:09:24.599106  364265 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
I1210 06:09:24.753370  364265 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
--- PASS: TestDownloadOnly/v1.34.3/cached-images (0.52s)
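
The "?checksum=file:<url>.sha256" form in the lines above means the downloader fetches the published digest file and verifies the binary against it instead of caching a copy. A hedged manual sketch of the same check (dl.k8s.io serves a bare hex digest in each .sha256 file):

	curl -fLO "https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm"
	# build a "digest  filename" line and let sha256sum verify it
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256)  kubeadm" | sha256sum -c -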

TestDownloadOnly/v1.34.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.3/binaries
--- PASS: TestDownloadOnly/v1.34.3/binaries (0.00s)

TestDownloadOnly/v1.34.3/LogsDuration (0.27s)

=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-091542
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-091542: exit status 85 (271.279584ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-789794 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-789794 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p download-only-789794                                                                                                                                                   │ download-only-789794 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ start   │ -o=json --download-only -p download-only-091542 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-091542 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:09:20
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:09:20.749726  364473 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:09:20.749903  364473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:09:20.749931  364473 out.go:374] Setting ErrFile to fd 2...
	I1210 06:09:20.749950  364473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:09:20.750223  364473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:09:20.750671  364473 out.go:368] Setting JSON to true
	I1210 06:09:20.751524  364473 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10313,"bootTime":1765336648,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:09:20.751620  364473 start.go:143] virtualization:  
	I1210 06:09:20.754921  364473 out.go:99] [download-only-091542] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:09:20.755212  364473 notify.go:221] Checking for updates...
	I1210 06:09:20.758199  364473 out.go:171] MINIKUBE_LOCATION=22094
	I1210 06:09:20.761540  364473 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:09:20.764667  364473 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:09:20.767647  364473 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:09:20.770576  364473 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 06:09:20.776293  364473 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 06:09:20.776566  364473 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:09:20.800530  364473 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:09:20.800643  364473 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:09:20.862567  364473 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-10 06:09:20.853311095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:09:20.862684  364473 docker.go:319] overlay module found
	I1210 06:09:20.865668  364473 out.go:99] Using the docker driver based on user configuration
	I1210 06:09:20.865722  364473 start.go:309] selected driver: docker
	I1210 06:09:20.865734  364473 start.go:927] validating driver "docker" against <nil>
	I1210 06:09:20.865841  364473 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:09:20.923924  364473 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-10 06:09:20.914968657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:09:20.924085  364473 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:09:20.924365  364473 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 06:09:20.924521  364473 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 06:09:20.927587  364473 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-091542 host does not exist
	  To start a cluster, run: "minikube start -p download-only-091542"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.27s)

TestDownloadOnly/v1.34.3/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.21s)

TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-091542
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0-rc.1/json-events (3.22s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-433687 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-433687 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.218510215s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (3.22s)

TestDownloadOnly/v1.35.0-rc.1/cached-images (0.45s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
I1210 06:09:28.820111  364265 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
I1210 06:09:28.967846  364265 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
I1210 06:09:29.123743  364265 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
--- PASS: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.45s)

TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
--- PASS: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-433687
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-433687: exit status 85 (82.109149ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-789794 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-789794 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p download-only-789794                                                                                                                                                        │ download-only-789794 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ start   │ -o=json --download-only -p download-only-091542 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-091542 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ delete  │ -p download-only-091542                                                                                                                                                        │ download-only-091542 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ start   │ -o=json --download-only -p download-only-433687 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-433687 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:09:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:09:25.585517  364701 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:09:25.586099  364701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:09:25.586138  364701 out.go:374] Setting ErrFile to fd 2...
	I1210 06:09:25.586158  364701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:09:25.586452  364701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:09:25.586908  364701 out.go:368] Setting JSON to true
	I1210 06:09:25.587756  364701 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10318,"bootTime":1765336648,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:09:25.587856  364701 start.go:143] virtualization:  
	I1210 06:09:25.591429  364701 out.go:99] [download-only-433687] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:09:25.591684  364701 notify.go:221] Checking for updates...
	I1210 06:09:25.594779  364701 out.go:171] MINIKUBE_LOCATION=22094
	I1210 06:09:25.597811  364701 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:09:25.600857  364701 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:09:25.603945  364701 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:09:25.607007  364701 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 06:09:25.612888  364701 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 06:09:25.613232  364701 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:09:25.651910  364701 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:09:25.652083  364701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:09:25.708708  364701 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 06:09:25.699224122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:09:25.708812  364701 docker.go:319] overlay module found
	I1210 06:09:25.711927  364701 out.go:99] Using the docker driver based on user configuration
	I1210 06:09:25.711975  364701 start.go:309] selected driver: docker
	I1210 06:09:25.711983  364701 start.go:927] validating driver "docker" against <nil>
	I1210 06:09:25.712102  364701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:09:25.766907  364701 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 06:09:25.757695551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:09:25.767070  364701 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:09:25.767332  364701 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 06:09:25.767483  364701 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 06:09:25.770571  364701 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-433687 host does not exist
	  To start a cluster, run: "minikube start -p download-only-433687"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-433687
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (1.07s)

=== RUN   TestBinaryMirror
I1210 06:09:31.198558  364265 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-172562 --alsologtostderr --binary-mirror http://127.0.0.1:37171 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-172562" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-172562
--- PASS: TestBinaryMirror (1.07s)
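
TestBinaryMirror appears to stand up a local HTTP endpoint and point minikube at it with --binary-mirror, confirming that kubectl/kubeadm/kubelet downloads can be redirected away from dl.k8s.io. A sketch of the flag in ordinary use; the profile name here is made up, and the mirror is assumed to reproduce dl.k8s.io's release path layout:

	out/minikube-linux-arm64 start --download-only -p mirror-demo \
	  --binary-mirror http://127.0.0.1:37171 --driver=docker --container-runtime=crio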

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-241520
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-241520: exit status 85 (77.49506ms)

-- stdout --
	* Profile "addons-241520" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-241520"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-241520
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-241520: exit status 85 (64.058392ms)

-- stdout --
	* Profile "addons-241520" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-241520"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (145.42s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-241520 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-241520 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m25.421283392s)
--- PASS: TestAddons/Setup (145.42s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-241520 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-241520 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)
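
This check depends on the gcp-auth addon replicating its gcp-auth secret into namespaces created after the addon was enabled. The two kubectl commands from the log double as a manual spot-check:

	kubectl --context addons-241520 create ns new-namespace
	kubectl --context addons-241520 get secret gcp-auth -n new-namespace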

TestAddons/serial/GCPAuth/FakeCredentials (10.85s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-241520 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-241520 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [3ccff718-6015-45fc-bd06-1b60258f39ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [3ccff718-6015-45fc-bd06-1b60258f39ae] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003483509s
addons_test.go:696: (dbg) Run:  kubectl --context addons-241520 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-241520 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-241520 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-241520 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.85s)
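
FakeCredentials exercises the gcp-auth mutating webhook: a plain busybox pod should come up with fake application-default credentials mounted and the matching environment variables injected. The probes below are the same ones the test runs and can be replayed by hand:

	kubectl --context addons-241520 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
	kubectl --context addons-241520 exec busybox -- cat /google-app-creds.json
	kubectl --context addons-241520 exec busybox -- printenv GOOGLE_CLOUD_PROJECT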

TestAddons/StoppedEnableDisable (12.48s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-241520
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-241520: (12.194261054s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-241520
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-241520
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-241520
--- PASS: TestAddons/StoppedEnableDisable (12.48s)

TestCertOptions (51.02s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-159261 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-159261 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (47.7144688s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-159261 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-159261 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-159261 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-159261" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-159261
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-159261: (2.460583233s)
--- PASS: TestCertOptions (51.02s)
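
TestCertOptions asserts that the extra --apiserver-ips/--apiserver-names values appear as SANs in the generated apiserver certificate and that port 8555 is honored. To inspect only the SANs rather than the full text dump, a hedged variant of the openssl call from the log (the -ext option needs OpenSSL 1.1.1 or newer):

	out/minikube-linux-arm64 -p cert-options-159261 ssh \
	  "openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"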

TestCertExpiration (260.88s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-751667 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-751667 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (55.690115798s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-751667 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-751667 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (22.404174547s)
helpers_test.go:176: Cleaning up "cert-expiration-751667" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-751667
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-751667: (2.783880633s)
--- PASS: TestCertExpiration (260.88s)

TestForceSystemdFlag (45.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-611499 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1210 07:33:38.175470  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-611499 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.084722346s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-611499 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-611499" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-611499
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-611499: (2.775884535s)
--- PASS: TestForceSystemdFlag (45.18s)
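
The --force-systemd run is validated by reading CRI-O's generated drop-in, which should select the systemd cgroup manager. A hedged spot-check built from the same ssh command the test uses, assuming the drop-in carries a cgroup_manager key as the test's own read of that file implies:

	out/minikube-linux-arm64 -p force-systemd-flag-611499 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"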

TestForceSystemdEnv (57.97s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-925156 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-925156 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (53.833085354s)
helpers_test.go:176: Cleaning up "force-systemd-env-925156" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-925156
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-925156: (4.139894431s)
--- PASS: TestForceSystemdEnv (57.97s)

TestErrorSpam/setup (40.28s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-206262 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-206262 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-206262 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-206262 --driver=docker  --container-runtime=crio: (40.276334573s)
--- PASS: TestErrorSpam/setup (40.28s)

TestErrorSpam/start (1.05s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 start --dry-run
--- PASS: TestErrorSpam/start (1.05s)

TestErrorSpam/status (1.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (6.61s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 pause: exit status 80 (2.026063888s)

-- stdout --
	* Pausing node nospam-206262 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 pause: exit status 80 (2.346466244s)

-- stdout --
	* Pausing node nospam-206262 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 pause: exit status 80 (2.241024875s)

-- stdout --
	* Pausing node nospam-206262 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.61s)
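
All three pause attempts above exit 80 with the same root cause: minikube shells into the node and asks runc for its container list, and /run/runc does not exist, which suggests no runc state directory was ever created on this node. The failing probe can be replayed verbatim from the stderr in the log:

	out/minikube-linux-arm64 -p nospam-206262 ssh "sudo runc list -f json"

The suite still records a PASS here because TestErrorSpam scores unexpected log output rather than asserting that the pause itself succeeds.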

TestErrorSpam/unpause (5.26s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 unpause: exit status 80 (1.678254647s)

-- stdout --
	* Unpausing node nospam-206262 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 unpause: exit status 80 (2.120950336s)

-- stdout --
	* Unpausing node nospam-206262 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 unpause: exit status 80 (1.460731869s)

-- stdout --
	* Unpausing node nospam-206262 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:16:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.26s)
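
Both unpause failures above reduce to a single probe: minikube shells into the node and runs `sudo runc list -f json`, and on this CRI-O node the listing fails because runc's default state directory /run/runc does not exist. A minimal sketch of that probe, assuming the node container from this run (nospam-206262) and using `docker exec` in place of `minikube ssh`:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same command the GUEST_UNPAUSE error wraps; CombinedOutput captures
	// the "open /run/runc: no such file or directory" stderr seen above.
	out, err := exec.Command("docker", "exec", "nospam-206262",
		"sudo", "runc", "list", "-f", "json").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("runc list failed:", err)
	}
}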

TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 stop: (1.318198679s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-206262 --log_dir /tmp/nospam-206262 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (60.7s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-013831 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1210 06:16:58.810047  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:58.816469  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:58.827808  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:58.849144  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:58.890458  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:58.971910  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:59.133424  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:59.454910  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:00.097472  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:01.408858  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:03.971101  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:09.092630  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:17:19.334017  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-013831 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m0.703575523s)
--- PASS: TestFunctional/serial/StartWithProxy (60.70s)
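
The cert_rotation lines above are retries against a client certificate from the already-deleted addons-241520 profile; the gaps between their timestamps roughly double each time, from about 6 ms up to about 10 s, which is the signature of an exponential backoff. A sketch of that cadence (the ~5 ms base and the factor of 2 are inferred from the timestamps, not read from client-go's configuration):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Gaps observed between the log lines: ~6ms, ~11ms, ~21ms, ... ~10.2s.
	delay := 5 * time.Millisecond
	for i := 1; i <= 12; i++ {
		fmt.Printf("gap %2d: ~%v\n", i, delay)
		delay *= 2
	}
}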

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.23s)

=== RUN   TestFunctional/serial/SoftStart
I1210 06:17:20.776961  364265 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-013831 --alsologtostderr -v=8
E1210 06:17:39.815978  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-013831 --alsologtostderr -v=8: (29.22253917s)
functional_test.go:678: soft start took 29.225972776s for "functional-013831" cluster.
I1210 06:17:49.999851  364265 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (29.23s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-013831 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-013831 cache add registry.k8s.io/pause:3.1: (1.16665045s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-013831 cache add registry.k8s.io/pause:3.3: (1.176315937s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-013831 cache add registry.k8s.io/pause:latest: (1.099773959s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-013831 /tmp/TestFunctionalserialCacheCmdcacheadd_local4174245473/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 cache add minikube-local-cache-test:functional-013831
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 cache delete minikube-local-cache-test:functional-013831
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-013831
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-013831 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.941066ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
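
The sequence above is the whole point of `cache reload`: remove the image on the node, show that `crictl inspecti` now fails, reload, and show that it succeeds again. A sketch replaying it with the same binary and profile (error handling elided; the middle inspecti is expected to fail):

package main

import (
	"fmt"
	"os/exec"
)

func minikube(args ...string) error {
	out, err := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-013831"}, args...)...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	minikube("ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	// Expected to fail: the image was just removed from the node.
	fmt.Println("inspecti after rmi:", minikube("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"))
	// reload pushes everything in the local cache back onto the node.
	minikube("cache", "reload")
	fmt.Println("inspecti after reload:", minikube("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"))
}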

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 kubectl -- --context functional-013831 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-013831 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (32.64s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-013831 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 06:18:20.778353  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-013831 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.637179838s)
functional_test.go:776: restart took 32.63728137s for "functional-013831" cluster.
I1210 06:18:30.171214  364265 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (32.64s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-013831 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
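
The phase/status pairs above come from parsing `kubectl get po -l tier=control-plane -n kube-system -o=json` and requiring phase Running plus a Ready condition on each control-plane pod. A sketch of that check (the structs are a minimal cut of the standard Kubernetes PodList JSON, not minikube's own types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-013831",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pl podList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}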

TestFunctional/serial/LogsCmd (1.45s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-013831 logs: (1.451837923s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

TestFunctional/serial/LogsFileCmd (1.52s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 logs --file /tmp/TestFunctionalserialLogsFileCmd3902413992/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-013831 logs --file /tmp/TestFunctionalserialLogsFileCmd3902413992/001/logs.txt: (1.518405525s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

TestFunctional/serial/InvalidService (4.16s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-013831 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-013831
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-013831: exit status 115 (385.799551ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31058 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-013831 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-013831 config get cpus: exit status 14 (81.843068ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-013831 config get cpus: exit status 14 (77.670425ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (15.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-013831 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-013831 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 391541: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.04s)

TestFunctional/parallel/DryRun (0.66s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-013831 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-013831 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (308.455224ms)

-- stdout --
	* [functional-013831] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1210 06:19:07.778198  390889 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:19:07.778315  390889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:19:07.778326  390889 out.go:374] Setting ErrFile to fd 2...
	I1210 06:19:07.778333  390889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:19:07.778700  390889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:19:07.779111  390889 out.go:368] Setting JSON to false
	I1210 06:19:07.780336  390889 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10900,"bootTime":1765336648,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:19:07.780413  390889 start.go:143] virtualization:  
	I1210 06:19:07.784510  390889 out.go:179] * [functional-013831] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:19:07.788197  390889 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:19:07.788380  390889 notify.go:221] Checking for updates...
	I1210 06:19:07.793759  390889 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:19:07.796796  390889 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:19:07.799778  390889 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:19:07.802699  390889 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:19:07.812176  390889 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:19:07.815780  390889 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:19:07.816356  390889 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:19:07.853428  390889 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:19:07.853666  390889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:19:07.984485  390889 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-10 06:19:07.972339207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:19:07.984584  390889 docker.go:319] overlay module found
	I1210 06:19:07.987641  390889 out.go:179] * Using the docker driver based on existing profile
	I1210 06:19:07.990437  390889 start.go:309] selected driver: docker
	I1210 06:19:07.990454  390889 start.go:927] validating driver "docker" against &{Name:functional-013831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-013831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:19:07.990569  390889 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:19:07.994043  390889 out.go:203] 
	W1210 06:19:07.996883  390889 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 06:19:07.999718  390889 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-013831 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.66s)
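
Both dry-run attempts above fail fast in validation: the requested 250MB is compared against minikube's usable floor before any driver work starts, hence exit status 23 with RSRC_INSUFFICIENT_REQ_MEMORY. A sketch of that gate (the 1800 figure is read off the error text; the real flag parser also handles unit suffixes, which this simplifies):

package main

import (
	"fmt"
	"os"
)

func validateMemoryMB(requested int) error {
	const minUsableMB = 1800 // floor quoted in the error message above
	if requested < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMB is less than the usable minimum of %dMB",
			requested, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateMemoryMB(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23)
	}
}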

TestFunctional/parallel/InternationalLanguage (0.29s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-013831 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-013831 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (284.884694ms)

-- stdout --
	* [functional-013831] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1210 06:19:07.513918  390787 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:19:07.514042  390787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:19:07.514051  390787 out.go:374] Setting ErrFile to fd 2...
	I1210 06:19:07.514056  390787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:19:07.515020  390787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:19:07.515440  390787 out.go:368] Setting JSON to false
	I1210 06:19:07.516410  390787 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10900,"bootTime":1765336648,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:19:07.516493  390787 start.go:143] virtualization:  
	I1210 06:19:07.520280  390787 out.go:179] * [functional-013831] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1210 06:19:07.525152  390787 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:19:07.525245  390787 notify.go:221] Checking for updates...
	I1210 06:19:07.529075  390787 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:19:07.532502  390787 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:19:07.535480  390787 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:19:07.538488  390787 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:19:07.541423  390787 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:19:07.547287  390787 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:19:07.547879  390787 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:19:07.574360  390787 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:19:07.574492  390787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:19:07.678175  390787 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 06:19:07.663928783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:19:07.678295  390787 docker.go:319] overlay module found
	I1210 06:19:07.681529  390787 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 06:19:07.684408  390787 start.go:309] selected driver: docker
	I1210 06:19:07.684436  390787 start.go:927] validating driver "docker" against &{Name:functional-013831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-013831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:19:07.684557  390787 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:19:07.687976  390787 out.go:203] 
	W1210 06:19:07.690898  390787 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 06:19:07.693875  390787 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)

TestFunctional/parallel/StatusCmd (1.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)

TestFunctional/parallel/ServiceCmdConnect (8.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-013831 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-013831 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-x6wr5" [4c0951b9-152d-468a-ac95-275c03fe09ff] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-x6wr5" [4c0951b9-152d-468a-ac95-275c03fe09ff] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003431472s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31662
functional_test.go:1680: http://192.168.49.2:31662: success! body:
Request served by hello-node-connect-7d85dfc575-x6wr5

HTTP/1.1 GET /

Host: 192.168.49.2:31662
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.59s)
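
The success body above is just the echo-server reflecting the request it received over the NodePort URL. A sketch of the fetch (the URL is specific to this run; NodePorts change between runs):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:31662")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body)) // "Request served by hello-node-connect-..."
}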

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (20.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [67a25a89-fd91-428d-8184-580a1b7c9c4e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003594081s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-013831 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-013831 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-013831 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-013831 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e7652be1-935c-48e4-9409-4fc5c12fabbb] Pending
helpers_test.go:353: "sp-pod" [e7652be1-935c-48e4-9409-4fc5c12fabbb] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003929252s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-013831 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-013831 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-013831 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [a581f11e-f06b-44d4-b9b3-a6f94989b9af] Pending
helpers_test.go:353: "sp-pod" [a581f11e-f06b-44d4-b9b3-a6f94989b9af] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003665163s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-013831 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.57s)
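
The second half of the test above is a persistence check: write a file through the first sp-pod, delete that pod, recreate it against the same PVC, and expect the file to survive. A sketch of the sequence (context and manifest paths taken from the log; the wait-for-Running step is elided):

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) ([]byte, error) {
	return exec.Command("kubectl",
		append([]string{"--context", "functional-013831"}, args...)...).CombinedOutput()
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...wait until the replacement sp-pod is Running, then:
	out, _ := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("%s", out) // "foo" should still be listed
}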

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (2.04s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh -n functional-013831 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 cp functional-013831:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1440438441/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh -n functional-013831 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh -n functional-013831 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.04s)

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/364265/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "sudo cat /etc/test/nested/copy/364265/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (1.83s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/364265.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "sudo cat /etc/ssl/certs/364265.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/364265.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "sudo cat /usr/share/ca-certificates/364265.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3642652.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "sudo cat /etc/ssl/certs/3642652.pem"
2025/12/10 06:19:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3642652.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "sudo cat /usr/share/ca-certificates/3642652.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.83s)
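
The /etc/ssl/certs/51391683.0 path above is not arbitrary: it follows OpenSSL's c_rehash naming, <subject_hash>.0, for the synced CA. A sketch that recomputes the expected name (both openssl on PATH and the CA location under this job's MINIKUBE_HOME are assumptions about the environment):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// c_rehash-style bundles are named after the certificate's subject hash.
	out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
		"-in", "/home/jenkins/minikube-integration/22094-362392/.minikube/ca.crt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("expected name: /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}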

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-013831 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
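
The --template argument above is an ordinary Go text/template: it ranges over the first node's labels map and prints only the keys. A standalone sketch of the same construct (the sample labels are illustrative, not this node's actual set):

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/arch":   "arm64",
		"kubernetes.io/os":     "linux",
		"minikube.k8s.io/name": "functional-013831",
	}
	// Same range form kubectl evaluates against (index .items 0).metadata.labels.
	t := template.Must(template.New("labels").Parse("{{range $k, $v := .}}{{$k}} {{end}}"))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}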

TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-013831 ssh "sudo systemctl is-active docker": exit status 1 (413.205082ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-013831 ssh "sudo systemctl is-active containerd": exit status 1 (358.390476ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)
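
The two exit-status-1 results above are the expected outcome: with crio as the active runtime, systemctl reports docker and containerd as inactive and exits non-zero (status 3 is systemd's conventional "not running" code, which ssh then propagates). The same probe by hand, as a sketch:

    $ minikube -p functional-013831 ssh "sudo systemctl is-active crio"    # active, exit 0
    $ minikube -p functional-013831 ssh "sudo systemctl is-active docker"  # inactive, exit != 0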

TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.41s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-013831 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-013831 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-013831 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 388723: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-013831 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-013831 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-013831 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [f2673931-b0c9-4f9b-aa46-45ec79f3edcb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [f2673931-b0c9-4f9b-aa46-45ec79f3edcb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003772056s
I1210 06:18:46.834585  364265 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.34s)
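
The test waits for the run=nginx-svc pod through its own polling helpers; with plain kubectl the same readiness gate can be approximated like this (a sketch; testdata/testsvc.yaml is the manifest applied above):

    $ kubectl --context functional-013831 apply -f testdata/testsvc.yaml
    $ kubectl --context functional-013831 wait pod -l run=nginx-svc --for=condition=Ready --timeout=4m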

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-013831 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.8.47 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
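
With the tunnel from StartTunnel still running, the LoadBalancer service is reachable from the host at the ingress IP read in WaitService/IngressIP. A by-hand version of this check, as a sketch:

    $ IP=$(kubectl --context functional-013831 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ curl -fsS "http://$IP" >/dev/null && echo "tunnel route works"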

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-013831 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-013831 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-013831 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-c2t7n" [a055e705-da15-4c64-b9b4-5cb15e9931cf] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-c2t7n" [a055e705-da15-4c64-b9b4-5cb15e9931cf] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003329248s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)
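
The deploy step is plain kubectl plus a NodePort expose; outside the harness the same flow looks like this (commands as run above, with the URL lookup exercised by the ServiceCmd subtests below):

    $ kubectl --context functional-013831 create deployment hello-node --image=kicbase/echo-server
    $ kubectl --context functional-013831 expose deployment hello-node --type=NodePort --port=8080
    $ minikube -p functional-013831 service hello-node --url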

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "367.452697ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "54.063773ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "369.16847ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "219.559874ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)
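
The JSON output is the machine-readable form of profile list; assuming minikube's current schema with top-level "valid" and "invalid" arrays, profile names can be extracted with jq (a sketch):

    $ minikube profile list -o json | jq -r '.valid[].Name'
    functional-013831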

TestFunctional/parallel/MountCmd/any-port (8.42s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-013831 /tmp/TestFunctionalparallelMountCmdany-port880630563/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765347540206651549" to /tmp/TestFunctionalparallelMountCmdany-port880630563/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765347540206651549" to /tmp/TestFunctionalparallelMountCmdany-port880630563/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765347540206651549" to /tmp/TestFunctionalparallelMountCmdany-port880630563/001/test-1765347540206651549
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-013831 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (473.584267ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:19:00.683752  364265 retry.go:31] will retry after 495.551201ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 06:19 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 06:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 06:19 test-1765347540206651549
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh cat /mount-9p/test-1765347540206651549
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-013831 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [0eff11f0-14ec-40c2-a371-2e6c6ec0e010] Pending
helpers_test.go:353: "busybox-mount" [0eff11f0-14ec-40c2-a371-2e6c6ec0e010] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [0eff11f0-14ec-40c2-a371-2e6c6ec0e010] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [0eff11f0-14ec-40c2-a371-2e6c6ec0e010] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003172851s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-013831 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-013831 /tmp/TestFunctionalparallelMountCmdany-port880630563/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.42s)
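
The initial findmnt failure and retry above are expected: the mount daemon starts asynchronously, so the 9p mount appears shortly after the command launches. The underlying workflow by hand (host path illustrative):

    $ minikube mount -p functional-013831 /tmp/shared:/mount-9p &
    $ minikube -p functional-013831 ssh "findmnt -T /mount-9p | grep 9p"   # succeeds once the mount is up
    $ minikube -p functional-013831 ssh "sudo umount -f /mount-9p"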

TestFunctional/parallel/ServiceCmd/List (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 service list -o json
functional_test.go:1504: Took "578.149012ms" to run "out/minikube-linux-arm64 -p functional-013831 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32396
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32396
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
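
The List/JSONOutput/HTTPS/Format/URL subtests are all the same service lookup with different output shaping; side by side (endpoints as observed in this run):

    $ minikube -p functional-013831 service hello-node --url                     # http://192.168.49.2:32396
    $ minikube -p functional-013831 service hello-node --https --url             # https://192.168.49.2:32396
    $ minikube -p functional-013831 service hello-node --url --format="{{.IP}}"  # node IP only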

TestFunctional/parallel/MountCmd/specific-port (2.18s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-013831 /tmp/TestFunctionalparallelMountCmdspecific-port1457788242/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-013831 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (480.72817ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:19:09.095208  364265 retry.go:31] will retry after 324.142029ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-013831 /tmp/TestFunctionalparallelMountCmdspecific-port1457788242/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-013831 ssh "sudo umount -f /mount-9p": exit status 1 (396.606715ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-013831 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-013831 /tmp/TestFunctionalparallelMountCmdspecific-port1457788242/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.18s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.89s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-013831 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3286057853/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-013831 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3286057853/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-013831 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3286057853/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-013831 ssh "findmnt -T" /mount1: exit status 1 (946.845696ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:19:11.748533  364265 retry.go:31] will retry after 633.624006ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-013831 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-013831 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3286057853/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-013831 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3286057853/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-013831 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3286057853/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.89s)
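
VerifyCleanup starts three mount daemons against the same host directory and relies on a single kill switch to stop them all; the "unable to find parent" messages above just confirm the processes were already gone when the stop helpers ran. The cleanup call in isolation:

    # Terminates every live mount process for the profile:
    $ minikube mount -p functional-013831 --kill=true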

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (1.13s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-013831 version -o=json --components: (1.12584082s)
--- PASS: TestFunctional/parallel/Version/components (1.13s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-013831 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-013831
localhost/kicbase/echo-server:functional-013831
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-013831 image ls --format short --alsologtostderr:
I1210 06:19:25.177361  393796 out.go:360] Setting OutFile to fd 1 ...
I1210 06:19:25.177594  393796 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:19:25.177627  393796 out.go:374] Setting ErrFile to fd 2...
I1210 06:19:25.177649  393796 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:19:25.177944  393796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:19:25.178714  393796 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 06:19:25.178872  393796 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 06:19:25.179444  393796 cli_runner.go:164] Run: docker container inspect functional-013831 --format={{.State.Status}}
I1210 06:19:25.200880  393796 ssh_runner.go:195] Run: systemctl --version
I1210 06:19:25.200930  393796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-013831
I1210 06:19:25.239062  393796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-013831/id_rsa Username:docker}
I1210 06:19:25.363115  393796 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-013831 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 517kB  │
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-013831  │ ce2d2cda2d858 │ 4.79MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ cbad6347cca28 │ 54.8MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.3            │ cf65ae6c8f700 │ 84.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 66749159455b3 │ 29MB   │
│ localhost/minikube-local-cache-test     │ functional-013831  │ 940a2590636f7 │ 3.33kB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.3            │ 7ada8ff13e54b │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.3            │ 4461daf6b6af8 │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.3            │ 2f2aa21d34d2d │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-013831 image ls --format table --alsologtostderr:
I1210 06:19:26.188391  394094 out.go:360] Setting OutFile to fd 1 ...
I1210 06:19:26.188583  394094 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:19:26.188596  394094 out.go:374] Setting ErrFile to fd 2...
I1210 06:19:26.188606  394094 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:19:26.188936  394094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:19:26.189632  394094 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 06:19:26.189980  394094 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 06:19:26.190696  394094 cli_runner.go:164] Run: docker container inspect functional-013831 --format={{.State.Status}}
I1210 06:19:26.218390  394094 ssh_runner.go:195] Run: systemctl --version
I1210 06:19:26.218451  394094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-013831
I1210 06:19:26.239838  394094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-013831/id_rsa Username:docker}
I1210 06:19:26.349677  394094 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-013831 image ls --format json --alsologtostderr:
[{"id":"4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162","repoDigests":["registry.k8s.io/kube-proxy@sha256:b71d5a937013d93e0e5ed313b3097155865bb887e99432b4c3409850520f1e99"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"75940132"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"517328"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-013831"],"size":"4789170"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29035622"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["public.ecr.aws/nginx/nginx@sha256:6224130b55f5d4f555846ebdedec6ce07822ebf205b9c1b77c2fd91abab6eb25","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54827372"},{"id":"cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896","repoDigests":["registry.k8s.io/kube-apiserver@sha256:efb2c84df56df82a0e23da97d0d981e79557ec36cea718cadbe28a1e5fca9700"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"84816170"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"940a2590636f7d16c9a93fa2e219ebb2c9db4676b8ee5043f99be2c468ce2534","repoDigests":["localhost/minikube-local-cache-test@sha256:dadd55849abb009e1d3cb028e08cc1b56daa3f0745305d9e1f179feb93ff5e18"],"repoTags":["localhost/minikube-local-cache-test:functional-013831"],"size":"3330"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6","repoDigests":["registry.k8s.io/kube-scheduler@sha256:93bcb2715187e1731760836692428d051f892d47c48c81cf8073a2f975661194"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"51589264"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:0d5528182a719e8bf8365b9b21780fccf4a779fb8159ce4bd327ec0eb7321c59"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73192074"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60854229"},{"id":"7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7efc389f6f9bde99f7060b87feb8525a0557490f5d1468ecb63345501120b0ef"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"72626320"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-013831 image ls --format json --alsologtostderr:
I1210 06:19:25.876421  394011 out.go:360] Setting OutFile to fd 1 ...
I1210 06:19:25.876552  394011 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:19:25.876564  394011 out.go:374] Setting ErrFile to fd 2...
I1210 06:19:25.876570  394011 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:19:25.876870  394011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:19:25.877663  394011 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 06:19:25.877829  394011 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 06:19:25.878396  394011 cli_runner.go:164] Run: docker container inspect functional-013831 --format={{.State.Status}}
I1210 06:19:25.905821  394011 ssh_runner.go:195] Run: systemctl --version
I1210 06:19:25.905878  394011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-013831
I1210 06:19:25.929738  394011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-013831/id_rsa Username:docker}
I1210 06:19:26.044535  394011 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
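
The JSON listing is the easiest of the four formats to post-process; for example, printing only tagged images and skipping dangling entries whose repoTags is empty (a jq sketch):

    $ minikube -p functional-013831 image ls --format json \
        | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'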

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-013831 image ls --format yaml --alsologtostderr:
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:f622cb4fcfc2061054bc12f0b65b2087d960e03e16a13bb4070fb6ba6fee7825
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60854229"
- id: cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:efb2c84df56df82a0e23da97d0d981e79557ec36cea718cadbe28a1e5fca9700
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "84816170"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 940a2590636f7d16c9a93fa2e219ebb2c9db4676b8ee5043f99be2c468ce2534
repoDigests:
- localhost/minikube-local-cache-test@sha256:dadd55849abb009e1d3cb028e08cc1b56daa3f0745305d9e1f179feb93ff5e18
repoTags:
- localhost/minikube-local-cache-test:functional-013831
size: "3330"
- id: 4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b71d5a937013d93e0e5ed313b3097155865bb887e99432b4c3409850520f1e99
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "75940132"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:0d5528182a719e8bf8365b9b21780fccf4a779fb8159ce4bd327ec0eb7321c59
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73192074"
- id: 7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7efc389f6f9bde99f7060b87feb8525a0557490f5d1468ecb63345501120b0ef
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "72626320"
- id: 2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:93bcb2715187e1731760836692428d051f892d47c48c81cf8073a2f975661194
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "51589264"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde
repoTags:
- registry.k8s.io/pause:3.10.1
size: "517328"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-013831
size: "4789170"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:6224130b55f5d4f555846ebdedec6ce07822ebf205b9c1b77c2fd91abab6eb25
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54827372"
- id: 66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29035622"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-013831 image ls --format yaml --alsologtostderr:
I1210 06:19:25.474291  393883 out.go:360] Setting OutFile to fd 1 ...
I1210 06:19:25.474481  393883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:19:25.474495  393883 out.go:374] Setting ErrFile to fd 2...
I1210 06:19:25.474501  393883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:19:25.474834  393883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:19:25.475499  393883 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 06:19:25.475677  393883 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 06:19:25.476268  393883 cli_runner.go:164] Run: docker container inspect functional-013831 --format={{.State.Status}}
I1210 06:19:25.511133  393883 ssh_runner.go:195] Run: systemctl --version
I1210 06:19:25.511189  393883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-013831
I1210 06:19:25.540900  393883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-013831/id_rsa Username:docker}
I1210 06:19:25.651832  393883 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-013831 ssh pgrep buildkitd: exit status 1 (384.999875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image build -t localhost/my-image:functional-013831 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-013831 image build -t localhost/my-image:functional-013831 testdata/build --alsologtostderr: (3.365636663s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-013831 image build -t localhost/my-image:functional-013831 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7bc93439729
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-013831
--> d68871ac6d5
Successfully tagged localhost/my-image:functional-013831
d68871ac6d5a667d516b92a996cf670c878decbb43213d286b376e7bf1ddf5c6
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-013831 image build -t localhost/my-image:functional-013831 testdata/build --alsologtostderr:
I1210 06:19:26.163187  394090 out.go:360] Setting OutFile to fd 1 ...
I1210 06:19:26.168380  394090 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:19:26.168495  394090 out.go:374] Setting ErrFile to fd 2...
I1210 06:19:26.168518  394090 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:19:26.168955  394090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:19:26.169669  394090 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 06:19:26.170778  394090 config.go:182] Loaded profile config "functional-013831": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1210 06:19:26.171367  394090 cli_runner.go:164] Run: docker container inspect functional-013831 --format={{.State.Status}}
I1210 06:19:26.200403  394090 ssh_runner.go:195] Run: systemctl --version
I1210 06:19:26.200459  394090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-013831
I1210 06:19:26.221409  394090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-013831/id_rsa Username:docker}
I1210 06:19:26.331855  394090 build_images.go:162] Building image from path: /tmp/build.668635701.tar
I1210 06:19:26.331920  394090 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 06:19:26.340540  394090 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.668635701.tar
I1210 06:19:26.345682  394090 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.668635701.tar: stat -c "%s %y" /var/lib/minikube/build/build.668635701.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.668635701.tar': No such file or directory
I1210 06:19:26.345760  394090 ssh_runner.go:362] scp /tmp/build.668635701.tar --> /var/lib/minikube/build/build.668635701.tar (3072 bytes)
I1210 06:19:26.370211  394090 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.668635701
I1210 06:19:26.379578  394090 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.668635701 -xf /var/lib/minikube/build/build.668635701.tar
I1210 06:19:26.395060  394090 crio.go:315] Building image: /var/lib/minikube/build/build.668635701
I1210 06:19:26.395128  394090 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-013831 /var/lib/minikube/build/build.668635701 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1210 06:19:29.427329  394090 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-013831 /var/lib/minikube/build/build.668635701 --cgroup-manager=cgroupfs: (3.032178585s)
I1210 06:19:29.427409  394090 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.668635701
I1210 06:19:29.435285  394090 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.668635701.tar
I1210 06:19:29.443189  394090 build_images.go:218] Built localhost/my-image:functional-013831 from /tmp/build.668635701.tar
I1210 06:19:29.443231  394090 build_images.go:134] succeeded building to: functional-013831
I1210 06:19:29.443236  394090 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.99s)
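
As the stderr above shows, on the crio runtime the image build subcommand ships the build context to the node as a tarball and runs podman build inside it. The user-facing call, stripped of test flags (testdata/build holds the three-step Dockerfile echoed in the stdout):

    $ minikube -p functional-013831 image build -t localhost/my-image:functional-013831 testdata/build
    $ minikube -p functional-013831 image ls | grep my-image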

TestFunctional/parallel/ImageCommands/Setup (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-013831
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image load --daemon kicbase/echo-server:functional-013831 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-013831 image load --daemon kicbase/echo-server:functional-013831 --alsologtostderr: (1.524389596s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.96s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image load --daemon kicbase/echo-server:functional-013831 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-013831 image load --daemon kicbase/echo-server:functional-013831 --alsologtostderr: (1.06777364s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-013831
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image load --daemon kicbase/echo-server:functional-013831 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image save kicbase/echo-server:functional-013831 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image rm kicbase/echo-server:functional-013831 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-013831
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 image save --daemon kicbase/echo-server:functional-013831 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-013831
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-013831 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-013831
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-013831
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-013831
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22094-362392/.minikube/files/etc/test/nested/copy/364265/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-253997 cache add registry.k8s.io/pause:3.1: (1.193683358s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-253997 cache add registry.k8s.io/pause:3.3: (1.228121674s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-253997 cache add registry.k8s.io/pause:latest: (1.141101953s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC4078093715/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 cache add minikube-local-cache-test:functional-253997
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 cache delete minikube-local-cache-test:functional-253997
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-253997
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.565911ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.80s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi446603339/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.97s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 config get cpus: exit status 14 (87.175199ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 config get cpus: exit status 14 (64.171147ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-253997 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-253997 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (190.017908ms)

-- stdout --
	* [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1210 06:48:56.494377  424645 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:48:56.494556  424645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:48:56.494587  424645 out.go:374] Setting ErrFile to fd 2...
	I1210 06:48:56.494609  424645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:48:56.494897  424645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:48:56.495294  424645 out.go:368] Setting JSON to false
	I1210 06:48:56.496154  424645 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12689,"bootTime":1765336648,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:48:56.496255  424645 start.go:143] virtualization:  
	I1210 06:48:56.499348  424645 out.go:179] * [functional-253997] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:48:56.503076  424645 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:48:56.503177  424645 notify.go:221] Checking for updates...
	I1210 06:48:56.509013  424645 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:48:56.511983  424645 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:48:56.514889  424645 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:48:56.517820  424645 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:48:56.520572  424645 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:48:56.523966  424645 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:48:56.524532  424645 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:48:56.559007  424645 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:48:56.559137  424645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:48:56.614524  424645 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:48:56.605090226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:48:56.614634  424645 docker.go:319] overlay module found
	I1210 06:48:56.617805  424645 out.go:179] * Using the docker driver based on existing profile
	I1210 06:48:56.620717  424645 start.go:309] selected driver: docker
	I1210 06:48:56.620738  424645 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:48:56.620858  424645 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:48:56.624422  424645 out.go:203] 
	W1210 06:48:56.627396  424645 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 06:48:56.630251  424645 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-253997 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-253997 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-253997 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (194.717581ms)

-- stdout --
	* [functional-253997] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1210 06:48:56.310437  424598 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:48:56.310583  424598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:48:56.310596  424598 out.go:374] Setting ErrFile to fd 2...
	I1210 06:48:56.310603  424598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:48:56.311072  424598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:48:56.311592  424598 out.go:368] Setting JSON to false
	I1210 06:48:56.312722  424598 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12689,"bootTime":1765336648,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1210 06:48:56.312807  424598 start.go:143] virtualization:  
	I1210 06:48:56.316387  424598 out.go:179] * [functional-253997] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1210 06:48:56.319603  424598 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:48:56.319664  424598 notify.go:221] Checking for updates...
	I1210 06:48:56.326845  424598 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:48:56.329754  424598 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	I1210 06:48:56.333287  424598 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	I1210 06:48:56.336231  424598 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:48:56.339110  424598 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:48:56.342325  424598 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1210 06:48:56.342988  424598 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:48:56.363386  424598 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:48:56.363530  424598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:48:56.424925  424598 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:48:56.415812646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:48:56.425030  424598 docker.go:319] overlay module found
	I1210 06:48:56.428147  424598 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 06:48:56.430876  424598 start.go:309] selected driver: docker
	I1210 06:48:56.430915  424598 start.go:927] validating driver "docker" against &{Name:functional-253997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-253997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:48:56.431015  424598 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:48:56.434656  424598 out.go:203] 
	W1210 06:48:56.437543  424598 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 06:48:56.440384  424598 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.74s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh -n functional-253997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 cp functional-253997:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm2050588435/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh -n functional-253997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh -n functional-253997 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/364265/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "sudo cat /etc/test/nested/copy/364265/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/364265.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "sudo cat /etc/ssl/certs/364265.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/364265.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "sudo cat /usr/share/ca-certificates/364265.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3642652.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "sudo cat /etc/ssl/certs/3642652.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3642652.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "sudo cat /usr/share/ca-certificates/3642652.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.74s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 ssh "sudo systemctl is-active docker": exit status 1 (274.966311ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 ssh "sudo systemctl is-active containerd": exit status 1 (284.862801ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-253997 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-253997 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "341.114332ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.689043ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "342.568735ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.327562ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1572231734/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (360.774804ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:48:49.476291  364265 retry.go:31] will retry after 383.639637ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1572231734/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 ssh "sudo umount -f /mount-9p": exit status 1 (269.572005ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-253997 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1572231734/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.82s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (2.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 ssh "findmnt -T" /mount1: exit status 1 (519.633106ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:48:51.456749  364265 retry.go:31] will retry after 679.243688ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-253997 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-253997 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4096302824/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (2.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-253997 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-253997
localhost/kicbase/echo-server:functional-253997
gcr.io/k8s-minikube/storage-provisioner:v5
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-253997 image ls --format short --alsologtostderr:
I1210 06:49:09.223845  426801 out.go:360] Setting OutFile to fd 1 ...
I1210 06:49:09.223992  426801 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:49:09.224004  426801 out.go:374] Setting ErrFile to fd 2...
I1210 06:49:09.224010  426801 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:49:09.224384  426801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:49:09.225955  426801 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:49:09.226142  426801 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:49:09.226689  426801 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
I1210 06:49:09.243466  426801 ssh_runner.go:195] Run: systemctl --version
I1210 06:49:09.243521  426801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
I1210 06:49:09.260717  426801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
I1210 06:49:09.368202  426801 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-253997 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                │ 66749159455b3 │ 29MB   │
│ localhost/kicbase/echo-server           │ functional-253997 │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test     │ functional-253997 │ 940a2590636f7 │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1      │ 3c6ba27e07aef │ 85MB   │
│ gcr.io/k8s-minikube/busybox             │ latest            │ 71a676dd070f4 │ 1.63MB │
│ localhost/my-image                      │ functional-253997 │ bf5656f4da5c7 │ 1.64MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1           │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/etcd                    │ 3.6.6-0           │ 271e49a0ebc56 │ 60.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1      │ a34b3483f25ba │ 72.2MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1      │ 7e3acea3d87aa │ 74.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1      │ abca4d5226620 │ 49.8MB │
│ registry.k8s.io/pause                   │ 3.1               │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1            │ d7b100cd9a77b │ 517kB  │
│ registry.k8s.io/pause                   │ 3.3               │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest            │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-253997 image ls --format table --alsologtostderr:
I1210 06:49:13.730906  427290 out.go:360] Setting OutFile to fd 1 ...
I1210 06:49:13.731127  427290 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:49:13.731176  427290 out.go:374] Setting ErrFile to fd 2...
I1210 06:49:13.731204  427290 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:49:13.731591  427290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:49:13.732354  427290 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:49:13.732526  427290 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:49:13.733109  427290 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
I1210 06:49:13.751268  427290 ssh_runner.go:195] Run: systemctl --version
I1210 06:49:13.751328  427290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
I1210 06:49:13.769478  427290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
I1210 06:49:13.872047  427290 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-253997 image ls --format json --alsologtostderr:
[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"940a2590636f7d16c9a93fa2e219ebb2c9db4676b8ee5043f99be2c468ce2534","repoDigests":["localhost/minikube-local-cache-test@sha256:dadd55849abb009e1d3cb028e08cc1b56daa3f0745305d9e1f179feb93ff5e18"],"repoTags":["localhost/minikube-local-cache-test:functional-253997"],"size":"3330"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:c711b5adb8fed0c7e2a62a6c327fd0f8486f90b93c1ebd0ba1e790b930373aae"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"60849030"},{"id":"abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde","repoDigests":["registry.k8s.io/kube-scheduler@sha256:9a2a4c91f5fdaf1a3c5e839f425cc7461b9e24d5f0816c207638df25ade3eda9"],"repoTags":["registry.k8s.io/kube-scheduler:v
1.35.0-rc.1"],"size":"49819792"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"3ce21967ec3250fe375de1cbde3dbe5001bf2eaba294905e01b51b1fe4740dfe","repoDigests":["docker.io/library/7fc4bad524cdbbc9643509e1ce179b157f0d262073bffad49ccc5e97c2463e96-tmp@sha256:9d78cbd9a83f596dc5c780e31c5d54c0f2518f94e5d58afe0995041c9d64a3f4"],"repoTags":[],"size":"1638177"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51","repoDigests":["gcr.io/k8s-minik
ube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29035622"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-253997"],"size":"4788229"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74488375"},{"id":"3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54","repoDigests":["registry.k8s.io/kube-apiserver@sha256:112cff968330968b8d8cc75dd9f232b5b048067a2151629e56b80ff7af621b72"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"85012778"},{"id":"8057e0500773a37cde2cff041eb
13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"bf5656f4da5c769159a8646539e33340e010bdefb9443501859ed9617d854eda","repoDigests":["localhost/my-image@sha256:58a49fff61650276c03f94e60a1b83c9cb6458d368fabd2aeb602c6a19b38f20"],"repoTags":["localhost/my-image:functional-253997"],"size":"1640791"},{"id":"7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e","repoDigests":["registry.k8s.io/kube-proxy@sha256:01f8409e5c04cbb256f29dc118ff184a24b9cbb97c7f6d3bd462333b366566ca"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"74105636"},{"id":"a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:89a8e28214a1c4b4631281e37feaf54299e7510d43c5445d4adc88575390f71e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"72167568"},{"id":"d7
b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"517328"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-253997 image ls --format json --alsologtostderr:
I1210 06:49:13.487203  427252 out.go:360] Setting OutFile to fd 1 ...
I1210 06:49:13.487474  427252 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:49:13.487503  427252 out.go:374] Setting ErrFile to fd 2...
I1210 06:49:13.487522  427252 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:49:13.487834  427252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:49:13.488482  427252 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:49:13.488671  427252 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:49:13.489349  427252 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
I1210 06:49:13.508574  427252 ssh_runner.go:195] Run: systemctl --version
I1210 06:49:13.508677  427252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
I1210 06:49:13.530090  427252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
I1210 06:49:13.635913  427252 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.25s)
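Of the four list formats, json is the machine-readable one: each entry carries id, repoDigests, repoTags, and size (bytes, as a string), as the stdout above shows. A hedged consumer one-liner, assuming jq is installed on the host:

# print every tag the node's runtime knows about (entries with empty repoTags contribute nothing)
out/minikube-linux-arm64 -p functional-253997 image ls --format json | jq -r '.[].repoTags[]'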
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-253997 image ls --format yaml --alsologtostderr:
- id: a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:89a8e28214a1c4b4631281e37feaf54299e7510d43c5445d4adc88575390f71e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "72167568"
- id: 7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:01f8409e5c04cbb256f29dc118ff184a24b9cbb97c7f6d3bd462333b366566ca
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "74105636"
- id: abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:9a2a4c91f5fdaf1a3c5e839f425cc7461b9e24d5f0816c207638df25ade3eda9
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "49819792"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:373d9ff3be95eeecb8d14e5f1ad528b612dbdd990a793b51c5842b450bcce938
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29035622"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-253997
size: "4788229"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:896cb1325b5b89905a93d31caea82d9b650f4801171a7218bd2b15ed92c58bde
repoTags:
- registry.k8s.io/pause:3.10.1
size: "517328"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 940a2590636f7d16c9a93fa2e219ebb2c9db4676b8ee5043f99be2c468ce2534
repoDigests:
- localhost/minikube-local-cache-test@sha256:dadd55849abb009e1d3cb028e08cc1b56daa3f0745305d9e1f179feb93ff5e18
repoTags:
- localhost/minikube-local-cache-test:functional-253997
size: "3330"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1c969ed669ef97056cd5145cf0983af1b7be48ff392798cfbf526392cb4cba80
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74488375"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:c711b5adb8fed0c7e2a62a6c327fd0f8486f90b93c1ebd0ba1e790b930373aae
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "60849030"
- id: 3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:112cff968330968b8d8cc75dd9f232b5b048067a2151629e56b80ff7af621b72
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "85012778"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-253997 image ls --format yaml --alsologtostderr:
I1210 06:49:09.458203  426838 out.go:360] Setting OutFile to fd 1 ...
I1210 06:49:09.458322  426838 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:49:09.458337  426838 out.go:374] Setting ErrFile to fd 2...
I1210 06:49:09.458343  426838 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:49:09.458669  426838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:49:09.459344  426838 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:49:09.459492  426838 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:49:09.460093  426838 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
I1210 06:49:09.476827  426838 ssh_runner.go:195] Run: systemctl --version
I1210 06:49:09.476893  426838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
I1210 06:49:09.495495  426838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
I1210 06:49:09.599783  426838 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.78s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-253997 ssh pgrep buildkitd: exit status 1 (285.81225ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image build -t localhost/my-image:functional-253997 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-253997 image build -t localhost/my-image:functional-253997 testdata/build --alsologtostderr: (3.254365857s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-253997 image build -t localhost/my-image:functional-253997 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3ce21967ec3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-253997
--> bf5656f4da5
Successfully tagged localhost/my-image:functional-253997
bf5656f4da5c769159a8646539e33340e010bdefb9443501859ed9617d854eda
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-253997 image build -t localhost/my-image:functional-253997 testdata/build --alsologtostderr:
I1210 06:49:09.986849  426944 out.go:360] Setting OutFile to fd 1 ...
I1210 06:49:09.987056  426944 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:49:09.987089  426944 out.go:374] Setting ErrFile to fd 2...
I1210 06:49:09.987110  426944 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:49:09.987393  426944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
I1210 06:49:09.988079  426944 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:49:09.988811  426944 config.go:182] Loaded profile config "functional-253997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1210 06:49:09.989464  426944 cli_runner.go:164] Run: docker container inspect functional-253997 --format={{.State.Status}}
I1210 06:49:10.023735  426944 ssh_runner.go:195] Run: systemctl --version
I1210 06:49:10.023800  426944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-253997
I1210 06:49:10.042945  426944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/functional-253997/id_rsa Username:docker}
I1210 06:49:10.148645  426944 build_images.go:162] Building image from path: /tmp/build.2744908254.tar
I1210 06:49:10.148761  426944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 06:49:10.157452  426944 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2744908254.tar
I1210 06:49:10.161628  426944 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2744908254.tar: stat -c "%s %y" /var/lib/minikube/build/build.2744908254.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2744908254.tar': No such file or directory
I1210 06:49:10.161667  426944 ssh_runner.go:362] scp /tmp/build.2744908254.tar --> /var/lib/minikube/build/build.2744908254.tar (3072 bytes)
I1210 06:49:10.181455  426944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2744908254
I1210 06:49:10.189979  426944 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2744908254 -xf /var/lib/minikube/build/build.2744908254.tar
I1210 06:49:10.198896  426944 crio.go:315] Building image: /var/lib/minikube/build/build.2744908254
I1210 06:49:10.198994  426944 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-253997 /var/lib/minikube/build/build.2744908254 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1210 06:49:13.157527  426944 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-253997 /var/lib/minikube/build/build.2744908254 --cgroup-manager=cgroupfs: (2.958504096s)
I1210 06:49:13.157596  426944 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2744908254
I1210 06:49:13.165595  426944 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2744908254.tar
I1210 06:49:13.173214  426944 build_images.go:218] Built localhost/my-image:functional-253997 from /tmp/build.2744908254.tar
I1210 06:49:13.173246  426944 build_images.go:134] succeeded building to: functional-253997
I1210 06:49:13.173252  426944 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.78s)
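The STEP lines in the build stdout imply a three-instruction Dockerfile in testdata/build. A minimal reproduction under that assumption (content.txt can be any file; per the stderr, minikube tars the context, copies it into the node, and drives sudo podman build there):

# recreate an equivalent build context (contents inferred from the STEP output above)
mkdir -p /tmp/build-demo && cd /tmp/build-demo
printf 'hello\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-arm64 -p functional-253997 image build -t localhost/my-image:functional-253997 .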
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-253997
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image load --daemon kicbase/echo-server:functional-253997 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-253997 image load --daemon kicbase/echo-server:functional-253997 --alsologtostderr: (1.118812352s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image load --daemon kicbase/echo-server:functional-253997 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-253997
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image load --daemon kicbase/echo-server:functional-253997 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image save kicbase/echo-server:functional-253997 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image rm kicbase/echo-server:functional-253997 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.78s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.78s)
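ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save/remove/restore round-trip. Condensed, with the tarball path shortened for illustration:

out/minikube-linux-arm64 -p functional-253997 image save kicbase/echo-server:functional-253997 /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-253997 image rm kicbase/echo-server:functional-253997
out/minikube-linux-arm64 -p functional-253997 image load /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-253997 image ls | grep echo-server    # the image is back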
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-253997
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 image save --daemon kicbase/echo-server:functional-253997 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-253997
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-253997 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-253997
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-253997
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-253997
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (175.81s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1210 06:51:52.315520  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:52.321941  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:52.333315  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:52.354711  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:52.396109  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:52.477453  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:52.638924  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:52.960499  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:53.602576  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:54.884217  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:57.445474  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:58.798952  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:52:02.567468  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:52:12.808731  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:52:33.290117  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:14.251431  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:38.175790  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m54.898186319s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (175.81s)
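The flags on the start invocation above, annotated (a sketch of minikube's documented behavior, not output from this run):

#   --ha            provision a multi-control-plane (highly available) cluster
#   --memory 3072   memory per node, in MB
#   --wait true     block until cluster components report healthy
out/minikube-linux-arm64 -p ha-602341 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio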
TestMultiControlPlane/serial/DeployApp (6.55s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 kubectl -- rollout status deployment/busybox: (3.737170702s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-mzng9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-swmzp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-xd69d -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-mzng9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-swmzp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-xd69d -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-mzng9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-swmzp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-xd69d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.55s)

TestMultiControlPlane/serial/PingHostFromPods (1.9s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-mzng9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-mzng9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-swmzp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-swmzp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-xd69d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 kubectl -- exec busybox-7b57f96db7-xd69d -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.90s)
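How the pipeline in those exec calls derives the host IP (the line-5 shape is an assumption about busybox nslookup output, not taken from this log):

# nslookup host.minikube.internal   -> line 5 reads roughly "Address 1: 192.168.49.1 ..."
# awk 'NR==5'                       -> keep only that line
# cut -d' ' -f3                     -> extract the address, which is then checked with ping -c 1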
TestMultiControlPlane/serial/AddWorkerNode (62.24s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 node add --alsologtostderr -v 5
E1210 06:54:36.180493  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 node add --alsologtostderr -v 5: (1m1.080420832s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5: (1.163004887s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.24s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-602341 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.095718605s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.10s)

TestMultiControlPlane/serial/CopyFile (20.95s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 status --output json --alsologtostderr -v 5: (1.099316579s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp testdata/cp-test.txt ha-602341:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2482733212/001/cp-test_ha-602341.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341:/home/docker/cp-test.txt ha-602341-m02:/home/docker/cp-test_ha-602341_ha-602341-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m02 "sudo cat /home/docker/cp-test_ha-602341_ha-602341-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341:/home/docker/cp-test.txt ha-602341-m03:/home/docker/cp-test_ha-602341_ha-602341-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m03 "sudo cat /home/docker/cp-test_ha-602341_ha-602341-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341:/home/docker/cp-test.txt ha-602341-m04:/home/docker/cp-test_ha-602341_ha-602341-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m04 "sudo cat /home/docker/cp-test_ha-602341_ha-602341-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp testdata/cp-test.txt ha-602341-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2482733212/001/cp-test_ha-602341-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m02:/home/docker/cp-test.txt ha-602341:/home/docker/cp-test_ha-602341-m02_ha-602341.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341 "sudo cat /home/docker/cp-test_ha-602341-m02_ha-602341.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m02:/home/docker/cp-test.txt ha-602341-m03:/home/docker/cp-test_ha-602341-m02_ha-602341-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m03 "sudo cat /home/docker/cp-test_ha-602341-m02_ha-602341-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m02:/home/docker/cp-test.txt ha-602341-m04:/home/docker/cp-test_ha-602341-m02_ha-602341-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m04 "sudo cat /home/docker/cp-test_ha-602341-m02_ha-602341-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp testdata/cp-test.txt ha-602341-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2482733212/001/cp-test_ha-602341-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m03:/home/docker/cp-test.txt ha-602341:/home/docker/cp-test_ha-602341-m03_ha-602341.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341 "sudo cat /home/docker/cp-test_ha-602341-m03_ha-602341.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m03:/home/docker/cp-test.txt ha-602341-m02:/home/docker/cp-test_ha-602341-m03_ha-602341-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m02 "sudo cat /home/docker/cp-test_ha-602341-m03_ha-602341-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m03:/home/docker/cp-test.txt ha-602341-m04:/home/docker/cp-test_ha-602341-m03_ha-602341-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m04 "sudo cat /home/docker/cp-test_ha-602341-m03_ha-602341-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp testdata/cp-test.txt ha-602341-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2482733212/001/cp-test_ha-602341-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m04:/home/docker/cp-test.txt ha-602341:/home/docker/cp-test_ha-602341-m04_ha-602341.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341 "sudo cat /home/docker/cp-test_ha-602341-m04_ha-602341.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m04:/home/docker/cp-test.txt ha-602341-m02:/home/docker/cp-test_ha-602341-m04_ha-602341-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m02 "sudo cat /home/docker/cp-test_ha-602341-m04_ha-602341-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m04:/home/docker/cp-test.txt ha-602341-m03:/home/docker/cp-test_ha-602341-m04_ha-602341-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m03 "sudo cat /home/docker/cp-test_ha-602341-m04_ha-602341-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.95s)
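For reference, the copy matrix above exercises every transfer direction "minikube cp" supports. A minimal hand-run sketch, assuming the same running ha-602341 profile and node names as this run (the local destination path is illustrative):

    # local file -> node
    out/minikube-linux-arm64 -p ha-602341 cp testdata/cp-test.txt ha-602341-m02:/home/docker/cp-test.txt
    # node -> local file
    out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-602341-m02.txt
    # node -> node, then verify on the receiving node
    out/minikube-linux-arm64 -p ha-602341 cp ha-602341-m02:/home/docker/cp-test.txt ha-602341-m03:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-602341 ssh -n ha-602341-m03 "sudo cat /home/docker/cp-test.txt"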

TestMultiControlPlane/serial/StopSecondaryNode (12.88s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 node stop m02 --alsologtostderr -v 5: (12.048204139s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5: exit status 7 (833.773074ms)

-- stdout --
	ha-602341
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-602341-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-602341-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-602341-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1210 06:55:37.341370  445397 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:55:37.341552  445397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:37.341563  445397 out.go:374] Setting ErrFile to fd 2...
	I1210 06:55:37.341569  445397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:37.341826  445397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:55:37.342003  445397 out.go:368] Setting JSON to false
	I1210 06:55:37.342034  445397 mustload.go:66] Loading cluster: ha-602341
	I1210 06:55:37.342092  445397 notify.go:221] Checking for updates...
	I1210 06:55:37.342438  445397 config.go:182] Loaded profile config "ha-602341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:55:37.342461  445397 status.go:174] checking status of ha-602341 ...
	I1210 06:55:37.343019  445397 cli_runner.go:164] Run: docker container inspect ha-602341 --format={{.State.Status}}
	I1210 06:55:37.400671  445397 status.go:371] ha-602341 host status = "Running" (err=<nil>)
	I1210 06:55:37.400698  445397 host.go:66] Checking if "ha-602341" exists ...
	I1210 06:55:37.400971  445397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-602341
	I1210 06:55:37.421164  445397 host.go:66] Checking if "ha-602341" exists ...
	I1210 06:55:37.421528  445397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:55:37.421573  445397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-602341
	I1210 06:55:37.442355  445397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/ha-602341/id_rsa Username:docker}
	I1210 06:55:37.552226  445397 ssh_runner.go:195] Run: systemctl --version
	I1210 06:55:37.559586  445397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:55:37.573882  445397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:55:37.634582  445397 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-10 06:55:37.624021057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:55:37.635159  445397 kubeconfig.go:125] found "ha-602341" server: "https://192.168.49.254:8443"
	I1210 06:55:37.635205  445397 api_server.go:166] Checking apiserver status ...
	I1210 06:55:37.635255  445397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:55:37.647762  445397 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	I1210 06:55:37.656602  445397 api_server.go:182] apiserver freezer: "9:freezer:/docker/cb5faee3080da248366e38092efc024d5dd0b4ef307ccfddaf89a13f3f683e0f/crio/crio-fef8084b498774abe93294c91c28c74ec4f835e5e2c6c818d414e7404c5314ee"
	I1210 06:55:37.656691  445397 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cb5faee3080da248366e38092efc024d5dd0b4ef307ccfddaf89a13f3f683e0f/crio/crio-fef8084b498774abe93294c91c28c74ec4f835e5e2c6c818d414e7404c5314ee/freezer.state
	I1210 06:55:37.670500  445397 api_server.go:204] freezer state: "THAWED"
	I1210 06:55:37.670526  445397 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 06:55:37.679227  445397 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 06:55:37.679256  445397 status.go:463] ha-602341 apiserver status = Running (err=<nil>)
	I1210 06:55:37.679267  445397 status.go:176] ha-602341 status: &{Name:ha-602341 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:55:37.679292  445397 status.go:174] checking status of ha-602341-m02 ...
	I1210 06:55:37.679611  445397 cli_runner.go:164] Run: docker container inspect ha-602341-m02 --format={{.State.Status}}
	I1210 06:55:37.703880  445397 status.go:371] ha-602341-m02 host status = "Stopped" (err=<nil>)
	I1210 06:55:37.703915  445397 status.go:384] host is not running, skipping remaining checks
	I1210 06:55:37.703922  445397 status.go:176] ha-602341-m02 status: &{Name:ha-602341-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:55:37.703942  445397 status.go:174] checking status of ha-602341-m03 ...
	I1210 06:55:37.704255  445397 cli_runner.go:164] Run: docker container inspect ha-602341-m03 --format={{.State.Status}}
	I1210 06:55:37.726242  445397 status.go:371] ha-602341-m03 host status = "Running" (err=<nil>)
	I1210 06:55:37.726264  445397 host.go:66] Checking if "ha-602341-m03" exists ...
	I1210 06:55:37.726747  445397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-602341-m03
	I1210 06:55:37.746853  445397 host.go:66] Checking if "ha-602341-m03" exists ...
	I1210 06:55:37.747173  445397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:55:37.747229  445397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-602341-m03
	I1210 06:55:37.774543  445397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/ha-602341-m03/id_rsa Username:docker}
	I1210 06:55:37.887018  445397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:55:37.901524  445397 kubeconfig.go:125] found "ha-602341" server: "https://192.168.49.254:8443"
	I1210 06:55:37.901554  445397 api_server.go:166] Checking apiserver status ...
	I1210 06:55:37.901597  445397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:55:37.913401  445397 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1962/cgroup
	I1210 06:55:37.922497  445397 api_server.go:182] apiserver freezer: "9:freezer:/docker/fe7e7489f6fbf57e345aee89a5609af0377b6e9847c95078d409a1716d39c93f/crio/crio-55a4de4d935628f7154464a37d60bb5811e280456963cef985a2a7caa8ffff5a"
	I1210 06:55:37.922631  445397 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fe7e7489f6fbf57e345aee89a5609af0377b6e9847c95078d409a1716d39c93f/crio/crio-55a4de4d935628f7154464a37d60bb5811e280456963cef985a2a7caa8ffff5a/freezer.state
	I1210 06:55:37.930908  445397 api_server.go:204] freezer state: "THAWED"
	I1210 06:55:37.930937  445397 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 06:55:37.940831  445397 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 06:55:37.940862  445397 status.go:463] ha-602341-m03 apiserver status = Running (err=<nil>)
	I1210 06:55:37.940872  445397 status.go:176] ha-602341-m03 status: &{Name:ha-602341-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:55:37.940906  445397 status.go:174] checking status of ha-602341-m04 ...
	I1210 06:55:37.941255  445397 cli_runner.go:164] Run: docker container inspect ha-602341-m04 --format={{.State.Status}}
	I1210 06:55:37.960022  445397 status.go:371] ha-602341-m04 host status = "Running" (err=<nil>)
	I1210 06:55:37.960051  445397 host.go:66] Checking if "ha-602341-m04" exists ...
	I1210 06:55:37.960392  445397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-602341-m04
	I1210 06:55:37.978136  445397 host.go:66] Checking if "ha-602341-m04" exists ...
	I1210 06:55:37.978499  445397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:55:37.978552  445397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-602341-m04
	I1210 06:55:37.997918  445397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/ha-602341-m04/id_rsa Username:docker}
	I1210 06:55:38.103467  445397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:55:38.117736  445397 status.go:176] ha-602341-m04 status: &{Name:ha-602341-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
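The stderr trace above shows the probe chain "minikube status" runs per control-plane node: locate the apiserver process, confirm its freezer cgroup is THAWED, then hit /healthz on the shared VIP. A hand-run equivalent, assuming a shell on a control-plane node via "minikube ssh"; the cgroup path placeholder must be filled in from the egrep output:

    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup    # yields e.g. 9:freezer:/docker/<id>/crio/crio-<id>
    sudo cat /sys/fs/cgroup/freezer/<path-from-egrep>/freezer.state    # expect THAWED
    curl -k https://192.168.49.254:8443/healthz    # expect 200 / ok (-k: this sketch skips cert verification)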

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

TestMultiControlPlane/serial/RestartSecondaryNode (29.34s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 node start m02 --alsologtostderr -v 5: (27.882831742s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5: (1.334964708s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.34s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.413256989s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.4s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 stop --alsologtostderr -v 5: (27.876133394s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 start --wait true --alsologtostderr -v 5
E1210 06:56:41.249056  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:56:52.316288  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:56:58.798991  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:57:20.021904  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 start --wait true --alsologtostderr -v 5: (1m35.308508308s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.40s)
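The assertion here is that a full stop/start cycle preserves the node roster; the E1210 cert_rotation lines refer to client certs of profiles deleted earlier in the run and appear unrelated to this cluster. Condensed sketch of the same check:

    out/minikube-linux-arm64 -p ha-602341 node list --alsologtostderr -v 5    # record the roster
    out/minikube-linux-arm64 -p ha-602341 stop --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-602341 start --wait true --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-602341 node list --alsologtostderr -v 5    # expect the same roster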

TestMultiControlPlane/serial/DeleteSecondaryNode (12.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 node delete m03 --alsologtostderr -v 5: (11.095405813s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.11s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

TestMultiControlPlane/serial/StopCluster (36.16s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 stop --alsologtostderr -v 5
E1210 06:58:38.177071  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 stop --alsologtostderr -v 5: (36.033473981s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5: exit status 7 (124.899196ms)

-- stdout --
	ha-602341
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-602341-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-602341-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1210 06:59:02.139763  457386 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:59:02.140009  457386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:59:02.140072  457386 out.go:374] Setting ErrFile to fd 2...
	I1210 06:59:02.140094  457386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:59:02.140373  457386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 06:59:02.140609  457386 out.go:368] Setting JSON to false
	I1210 06:59:02.140696  457386 mustload.go:66] Loading cluster: ha-602341
	I1210 06:59:02.140763  457386 notify.go:221] Checking for updates...
	I1210 06:59:02.141241  457386 config.go:182] Loaded profile config "ha-602341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 06:59:02.141298  457386 status.go:174] checking status of ha-602341 ...
	I1210 06:59:02.141907  457386 cli_runner.go:164] Run: docker container inspect ha-602341 --format={{.State.Status}}
	I1210 06:59:02.161280  457386 status.go:371] ha-602341 host status = "Stopped" (err=<nil>)
	I1210 06:59:02.161305  457386 status.go:384] host is not running, skipping remaining checks
	I1210 06:59:02.161312  457386 status.go:176] ha-602341 status: &{Name:ha-602341 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:59:02.161343  457386 status.go:174] checking status of ha-602341-m02 ...
	I1210 06:59:02.161646  457386 cli_runner.go:164] Run: docker container inspect ha-602341-m02 --format={{.State.Status}}
	I1210 06:59:02.193464  457386 status.go:371] ha-602341-m02 host status = "Stopped" (err=<nil>)
	I1210 06:59:02.193488  457386 status.go:384] host is not running, skipping remaining checks
	I1210 06:59:02.193495  457386 status.go:176] ha-602341-m02 status: &{Name:ha-602341-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:59:02.193513  457386 status.go:174] checking status of ha-602341-m04 ...
	I1210 06:59:02.193822  457386 cli_runner.go:164] Run: docker container inspect ha-602341-m04 --format={{.State.Status}}
	I1210 06:59:02.211885  457386 status.go:371] ha-602341-m04 host status = "Stopped" (err=<nil>)
	I1210 06:59:02.211909  457386 status.go:384] host is not running, skipping remaining checks
	I1210 06:59:02.211917  457386 status.go:176] ha-602341-m04 status: &{Name:ha-602341-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.16s)
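Note the exit code: with every host stopped, "minikube status" exits 7, the same code seen after stopping only m02 above, so scripts can branch on the return code instead of parsing the table. A minimal sketch; treating any non-zero code as degraded is the conservative choice:

    out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5
    rc=$?
    [ "$rc" -ne 0 ] && echo "cluster degraded or stopped (exit $rc)"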

TestMultiControlPlane/serial/RestartCluster (95.7s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m34.671417569s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (95.70s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (67.34s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 node add --control-plane --alsologtostderr -v 5: (1m6.06056558s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-602341 status --alsologtostderr -v 5: (1.284101261s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (67.34s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.127928355s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

TestJSONOutput/start/Command (59.37s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-933033 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1210 07:01:58.799114  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-933033 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (59.362276475s)
--- PASS: TestJSONOutput/start/Command (59.37s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-933033 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-933033 --output=json --user=testUser: (5.830340768s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-445801 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-445801 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (96.127875ms)

-- stdout --
	{"specversion":"1.0","id":"bca707b7-543e-4307-b84b-bc5857bab19d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-445801] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"177ccff4-8719-42d3-8722-66baffc6c3a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22094"}}
	{"specversion":"1.0","id":"6efe1cf7-3438-4237-a947-7abcb3ea2b8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"75f4c3f5-af89-498d-8270-223067cdb780","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig"}}
	{"specversion":"1.0","id":"107b2d8e-bf4c-4e4b-974d-d12ef6db3487","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube"}}
	{"specversion":"1.0","id":"7d84726c-c331-4d4f-9fd9-007ecc9a1dbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d924ca37-b0b8-495b-8e79-c2ca6fa3c57c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"db673f17-1e3c-4d4d-aadb-5f50e1075933","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-445801" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-445801
--- PASS: TestErrorJSONOutput (0.25s)
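Each --output=json line is a CloudEvents-style object, so the stream is easy to post-process. A sketch assuming jq is available (jq is not part of the test); it extracts the error event shown above:

    out/minikube-linux-arm64 start -p json-output-error-445801 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64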

TestKicCustomNetwork/create_custom_network (43.3s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-077809 --network=
E1210 07:03:38.177165  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-077809 --network=: (41.04546522s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-077809" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-077809
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-077809: (2.228284638s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.30s)

TestKicCustomNetwork/use_default_bridge_network (41.41s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-560881 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-560881 --network=bridge: (39.275951445s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-560881" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-560881
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-560881: (2.11093802s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (41.41s)

TestKicExistingNetwork (43.14s)

=== RUN   TestKicExistingNetwork
I1210 07:04:35.300868  364265 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1210 07:04:35.317026  364265 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1210 07:04:35.317102  364265 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1210 07:04:35.317120  364265 cli_runner.go:164] Run: docker network inspect existing-network
W1210 07:04:35.334456  364265 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1210 07:04:35.334491  364265 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1210 07:04:35.334505  364265 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1210 07:04:35.334612  364265 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 07:04:35.352753  364265 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9731135ae282 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:e0:de:21:5b:05} reservation:<nil>}
I1210 07:04:35.353154  364265 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b6d230}
I1210 07:04:35.353177  364265 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1210 07:04:35.353252  364265 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1210 07:04:35.422566  364265 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-593284 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-593284 --network=existing-network: (40.823370755s)
helpers_test.go:176: Cleaning up "existing-network-593284" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-593284
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-593284: (2.165036906s)
I1210 07:05:18.427550  364265 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (43.14s)
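The network_create trace shows how minikube provisions a KIC network when the requested one is missing: it skips subnets already in use (192.168.49.0/24 here), picks the next free private /24, and creates a labeled bridge. The equivalent manual steps, taken from the commands in this run:

    docker network create --driver=bridge \
      --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=existing-network \
      existing-network
    out/minikube-linux-arm64 start -p existing-network-593284 --network=existing-network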

TestKicCustomSubnet (41.82s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-468855 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-468855 --subnet=192.168.60.0/24: (39.4216546s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-468855 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-468855" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-468855
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-468855: (2.379521736s)
--- PASS: TestKicCustomSubnet (41.82s)
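The subnet assertion reduces to one docker inspect with a Go template. Sketch, using this run's values:

    out/minikube-linux-arm64 start -p custom-subnet-468855 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-468855 --format "{{(index .IPAM.Config 0).Subnet}}"    # expect 192.168.60.0/24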

TestKicStaticIP (39.13s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-987538 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-987538 --static-ip=192.168.200.200: (36.652875828s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-987538 ip
helpers_test.go:176: Cleaning up "static-ip-987538" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-987538
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-987538: (2.309597999s)
--- PASS: TestKicStaticIP (39.13s)
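The static-IP variant is checked the same way, via "minikube ip". Sketch from this run's values:

    out/minikube-linux-arm64 start -p static-ip-987538 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-987538 ip    # expect 192.168.200.200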

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (92.25s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-397700 --driver=docker  --container-runtime=crio
E1210 07:06:41.914718  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:06:52.317693  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:06:58.798956  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-397700 --driver=docker  --container-runtime=crio: (42.225076915s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-400457 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-400457 --driver=docker  --container-runtime=crio: (43.970656917s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-397700
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-400457
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-400457" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-400457
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-400457: (2.09238318s)
helpers_test.go:176: Cleaning up "first-397700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-397700
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-397700: (2.474376086s)
--- PASS: TestMinikubeProfile (92.25s)
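This test drives two clusters and flips the active profile between them; "minikube profile NAME" selects which cluster subsequent commands target by default. Condensed sketch:

    out/minikube-linux-arm64 start -p first-397700 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p second-400457 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 profile first-397700    # make first-397700 the active profile
    out/minikube-linux-arm64 profile list -ojson     # the JSON should now report first-397700 as active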

TestMountStart/serial/StartWithMountFirst (9.03s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-899802 --memory=3072 --mount-string /tmp/TestMountStartserial362134701/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1210 07:08:15.387415  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-899802 --memory=3072 --mount-string /tmp/TestMountStartserial362134701/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.031393293s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.03s)
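The mount flags map host-path:guest-path plus ownership and transport tuning. A sketch with the flags used above; the host path is illustrative:

    out/minikube-linux-arm64 start -p mount-start-1-899802 --memory=3072 \
      --mount-string /tmp/host-dir:/minikube-host \
      --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p mount-start-1-899802 ssh -- ls /minikube-host    # host files visible in the guest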

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-899802 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.74s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-901690 --memory=3072 --mount-string /tmp/TestMountStartserial362134701/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-901690 --memory=3072 --mount-string /tmp/TestMountStartserial362134701/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.742889179s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.74s)

TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-901690 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (1.76s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-899802 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-899802 --alsologtostderr -v=5: (1.758005638s)
--- PASS: TestMountStart/serial/DeleteFirst (1.76s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-901690 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-901690
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-901690: (1.299075275s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (8.41s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-901690
E1210 07:08:38.177624  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-901690: (7.40736992s)
--- PASS: TestMountStart/serial/RestartStopped (8.41s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-901690 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (89.73s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-393672 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-393672 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m29.181758333s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (89.73s)

TestMultiNode/serial/DeployApp2Nodes (5.07s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-393672 -- rollout status deployment/busybox: (3.331468813s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- exec busybox-7b57f96db7-kzlf8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- exec busybox-7b57f96db7-vgqs9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- exec busybox-7b57f96db7-kzlf8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- exec busybox-7b57f96db7-vgqs9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- exec busybox-7b57f96db7-kzlf8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- exec busybox-7b57f96db7-vgqs9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.07s)
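
The deployment-and-DNS check above boils down to applying the busybox manifest, waiting for the rollout, and resolving names from inside each replica (a sketch; pod names are generated per run, the one below is from this log):

$ minikube kubectl -p multinode-393672 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
$ minikube kubectl -p multinode-393672 -- rollout status deployment/busybox
$ minikube kubectl -p multinode-393672 -- exec busybox-7b57f96db7-kzlf8 -- nslookup kubernetes.default.svc.cluster.local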

TestMultiNode/serial/PingHostFrom2Pods (0.92s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- exec busybox-7b57f96db7-kzlf8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- exec busybox-7b57f96db7-kzlf8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- exec busybox-7b57f96db7-vgqs9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-393672 -- exec busybox-7b57f96db7-vgqs9 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
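
The host-reachability check extracts the resolved address of host.minikube.internal from busybox's nslookup output and pings it. The awk 'NR==5' / cut pipeline assumes the address sits on the fifth line of that output, which appears to hold for the busybox image used here (a sketch using a pod name and host IP from this run):

$ minikube kubectl -p multinode-393672 -- exec busybox-7b57f96db7-kzlf8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
$ minikube kubectl -p multinode-393672 -- exec busybox-7b57f96db7-kzlf8 -- sh -c "ping -c 1 192.168.67.1"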

TestMultiNode/serial/AddNode (31.77s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-393672 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-393672 -v=5 --alsologtostderr: (31.046545431s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.77s)
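
Adding a third node to a running profile is a single command, verified with status (sketch):

$ minikube node add -p multinode-393672
$ minikube status -p multinode-393672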

TestMultiNode/serial/MultiNodeLabels (0.1s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-393672 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.76s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.76s)

TestMultiNode/serial/CopyFile (11.09s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 cp testdata/cp-test.txt multinode-393672:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 cp multinode-393672:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3196131059/001/cp-test_multinode-393672.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 cp multinode-393672:/home/docker/cp-test.txt multinode-393672-m02:/home/docker/cp-test_multinode-393672_multinode-393672-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672-m02 "sudo cat /home/docker/cp-test_multinode-393672_multinode-393672-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 cp multinode-393672:/home/docker/cp-test.txt multinode-393672-m03:/home/docker/cp-test_multinode-393672_multinode-393672-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672-m03 "sudo cat /home/docker/cp-test_multinode-393672_multinode-393672-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 cp testdata/cp-test.txt multinode-393672-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 cp multinode-393672-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3196131059/001/cp-test_multinode-393672-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 cp multinode-393672-m02:/home/docker/cp-test.txt multinode-393672:/home/docker/cp-test_multinode-393672-m02_multinode-393672.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672 "sudo cat /home/docker/cp-test_multinode-393672-m02_multinode-393672.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 cp multinode-393672-m02:/home/docker/cp-test.txt multinode-393672-m03:/home/docker/cp-test_multinode-393672-m02_multinode-393672-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672-m03 "sudo cat /home/docker/cp-test_multinode-393672-m02_multinode-393672-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 cp testdata/cp-test.txt multinode-393672-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 cp multinode-393672-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3196131059/001/cp-test_multinode-393672-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 cp multinode-393672-m03:/home/docker/cp-test.txt multinode-393672:/home/docker/cp-test_multinode-393672-m03_multinode-393672.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672 "sudo cat /home/docker/cp-test_multinode-393672-m03_multinode-393672.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 cp multinode-393672-m03:/home/docker/cp-test.txt multinode-393672-m02:/home/docker/cp-test_multinode-393672-m03_multinode-393672-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 ssh -n multinode-393672-m02 "sudo cat /home/docker/cp-test_multinode-393672-m03_multinode-393672-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.09s)
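
The copy matrix above exercises the three forms of minikube cp: host to node, node to host, and node to node (a sketch; the /tmp destination path is illustrative):

$ minikube -p multinode-393672 cp testdata/cp-test.txt multinode-393672:/home/docker/cp-test.txt
$ minikube -p multinode-393672 cp multinode-393672:/home/docker/cp-test.txt /tmp/cp-test.txt
$ minikube -p multinode-393672 cp multinode-393672:/home/docker/cp-test.txt multinode-393672-m02:/home/docker/cp-test.txt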

TestMultiNode/serial/StopNode (2.48s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-393672 node stop m03: (1.346268776s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-393672 status: exit status 7 (569.579927ms)
-- stdout --
	multinode-393672
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-393672-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-393672-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-393672 status --alsologtostderr: exit status 7 (559.770823ms)
-- stdout --
	multinode-393672
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-393672-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-393672-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1210 07:11:05.454580  514165 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:11:05.454701  514165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:11:05.454711  514165 out.go:374] Setting ErrFile to fd 2...
	I1210 07:11:05.454717  514165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:11:05.454988  514165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 07:11:05.455174  514165 out.go:368] Setting JSON to false
	I1210 07:11:05.455206  514165 mustload.go:66] Loading cluster: multinode-393672
	I1210 07:11:05.455622  514165 config.go:182] Loaded profile config "multinode-393672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:11:05.455645  514165 status.go:174] checking status of multinode-393672 ...
	I1210 07:11:05.456158  514165 cli_runner.go:164] Run: docker container inspect multinode-393672 --format={{.State.Status}}
	I1210 07:11:05.456404  514165 notify.go:221] Checking for updates...
	I1210 07:11:05.477141  514165 status.go:371] multinode-393672 host status = "Running" (err=<nil>)
	I1210 07:11:05.477166  514165 host.go:66] Checking if "multinode-393672" exists ...
	I1210 07:11:05.477496  514165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-393672
	I1210 07:11:05.503280  514165 host.go:66] Checking if "multinode-393672" exists ...
	I1210 07:11:05.503603  514165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:11:05.503656  514165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-393672
	I1210 07:11:05.525536  514165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33284 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/multinode-393672/id_rsa Username:docker}
	I1210 07:11:05.631616  514165 ssh_runner.go:195] Run: systemctl --version
	I1210 07:11:05.639331  514165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:11:05.660690  514165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:11:05.718689  514165 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-10 07:11:05.709009724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:11:05.719270  514165 kubeconfig.go:125] found "multinode-393672" server: "https://192.168.67.2:8443"
	I1210 07:11:05.719309  514165 api_server.go:166] Checking apiserver status ...
	I1210 07:11:05.719351  514165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:05.731690  514165 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1970/cgroup
	I1210 07:11:05.740223  514165 api_server.go:182] apiserver freezer: "9:freezer:/docker/9c6258b06eed412ef9b4c1448f27cb6aeab43c2f8028b8eb6d7fff9b8ccd94bf/crio/crio-8681de8d7c626cb9059956883fafca802565bf90082a706478ed8135b1b6cfda"
	I1210 07:11:05.740290  514165 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9c6258b06eed412ef9b4c1448f27cb6aeab43c2f8028b8eb6d7fff9b8ccd94bf/crio/crio-8681de8d7c626cb9059956883fafca802565bf90082a706478ed8135b1b6cfda/freezer.state
	I1210 07:11:05.748674  514165 api_server.go:204] freezer state: "THAWED"
	I1210 07:11:05.748706  514165 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1210 07:11:05.757146  514165 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1210 07:11:05.757176  514165 status.go:463] multinode-393672 apiserver status = Running (err=<nil>)
	I1210 07:11:05.757309  514165 status.go:176] multinode-393672 status: &{Name:multinode-393672 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 07:11:05.757340  514165 status.go:174] checking status of multinode-393672-m02 ...
	I1210 07:11:05.757648  514165 cli_runner.go:164] Run: docker container inspect multinode-393672-m02 --format={{.State.Status}}
	I1210 07:11:05.776375  514165 status.go:371] multinode-393672-m02 host status = "Running" (err=<nil>)
	I1210 07:11:05.776405  514165 host.go:66] Checking if "multinode-393672-m02" exists ...
	I1210 07:11:05.776787  514165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-393672-m02
	I1210 07:11:05.795453  514165 host.go:66] Checking if "multinode-393672-m02" exists ...
	I1210 07:11:05.795782  514165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:11:05.795827  514165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-393672-m02
	I1210 07:11:05.814002  514165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/22094-362392/.minikube/machines/multinode-393672-m02/id_rsa Username:docker}
	I1210 07:11:05.919007  514165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:11:05.932047  514165 status.go:176] multinode-393672-m02 status: &{Name:multinode-393672-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 07:11:05.932091  514165 status.go:174] checking status of multinode-393672-m03 ...
	I1210 07:11:05.932400  514165 cli_runner.go:164] Run: docker container inspect multinode-393672-m03 --format={{.State.Status}}
	I1210 07:11:05.950278  514165 status.go:371] multinode-393672-m03 host status = "Stopped" (err=<nil>)
	I1210 07:11:05.950304  514165 status.go:384] host is not running, skipping remaining checks
	I1210 07:11:05.950312  514165 status.go:176] multinode-393672-m03 status: &{Name:multinode-393672-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.48s)
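
Note the exit code: with any node stopped, minikube status returns 7 rather than 0, which is what the test keys on and what a script can key on too (sketch):

$ minikube -p multinode-393672 node stop m03
$ minikube -p multinode-393672 status || echo "status exited $? (a node is not running)"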

TestMultiNode/serial/StartAfterStop (9.17s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-393672 node start m03 -v=5 --alsologtostderr: (8.335333485s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.17s)
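
Restarting just the stopped node and re-checking cluster-wide state (sketch):

$ minikube -p multinode-393672 node start m03
$ minikube -p multinode-393672 status
$ kubectl get nodes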

TestMultiNode/serial/RestartKeepsNodes (76.73s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-393672
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-393672
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-393672: (25.156882232s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-393672 --wait=true -v=5 --alsologtostderr
E1210 07:11:52.317374  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:11:58.799574  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-393672 --wait=true -v=5 --alsologtostderr: (51.424647006s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-393672
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.73s)
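
The restart-keeps-nodes property can be checked by hand by comparing the node list before and after a full stop/start cycle (sketch):

$ minikube node list -p multinode-393672
$ minikube stop -p multinode-393672
$ minikube start -p multinode-393672 --wait=true
$ minikube node list -p multinode-393672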

TestMultiNode/serial/DeleteNode (5.57s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-393672 node delete m03: (4.861165299s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.57s)
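
Deleting a node and confirming it is gone from the cluster view (sketch):

$ minikube -p multinode-393672 node delete m03
$ kubectl get nodes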

TestMultiNode/serial/StopMultiNode (24.03s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-393672 stop: (23.836794108s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-393672 status: exit status 7 (90.794805ms)
-- stdout --
	multinode-393672
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-393672-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-393672 status --alsologtostderr: exit status 7 (97.402877ms)
-- stdout --
	multinode-393672
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-393672-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1210 07:13:01.401156  522180 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:13:01.401457  522180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:13:01.401501  522180 out.go:374] Setting ErrFile to fd 2...
	I1210 07:13:01.401529  522180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:13:01.402054  522180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 07:13:01.402405  522180 out.go:368] Setting JSON to false
	I1210 07:13:01.402499  522180 mustload.go:66] Loading cluster: multinode-393672
	I1210 07:13:01.403403  522180 config.go:182] Loaded profile config "multinode-393672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:13:01.403874  522180 status.go:174] checking status of multinode-393672 ...
	I1210 07:13:01.403990  522180 notify.go:221] Checking for updates...
	I1210 07:13:01.404713  522180 cli_runner.go:164] Run: docker container inspect multinode-393672 --format={{.State.Status}}
	I1210 07:13:01.421638  522180 status.go:371] multinode-393672 host status = "Stopped" (err=<nil>)
	I1210 07:13:01.421667  522180 status.go:384] host is not running, skipping remaining checks
	I1210 07:13:01.421676  522180 status.go:176] multinode-393672 status: &{Name:multinode-393672 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 07:13:01.421712  522180 status.go:174] checking status of multinode-393672-m02 ...
	I1210 07:13:01.422022  522180 cli_runner.go:164] Run: docker container inspect multinode-393672-m02 --format={{.State.Status}}
	I1210 07:13:01.443009  522180 status.go:371] multinode-393672-m02 host status = "Stopped" (err=<nil>)
	I1210 07:13:01.443027  522180 status.go:384] host is not running, skipping remaining checks
	I1210 07:13:01.443033  522180 status.go:176] multinode-393672-m02 status: &{Name:multinode-393672-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.03s)

TestMultiNode/serial/RestartMultiNode (57.79s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-393672 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1210 07:13:21.251332  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:13:38.175596  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-393672 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (57.078114064s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-393672 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.79s)

TestMultiNode/serial/ValidateNameConflict (44.43s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-393672
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-393672-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-393672-m02 --driver=docker  --container-runtime=crio: exit status 14 (109.547595ms)
-- stdout --
	* [multinode-393672-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-393672-m02' is duplicated with machine name 'multinode-393672-m02' in profile 'multinode-393672'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-393672-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-393672-m03 --driver=docker  --container-runtime=crio: (41.851482654s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-393672
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-393672: exit status 80 (340.30397ms)
-- stdout --
	* Adding node m03 to cluster multinode-393672 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-393672-m03 already exists in multinode-393672-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-393672-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-393672-m03: (2.074812716s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.43s)
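
The two failures above are the intended guards: a new profile may not reuse a machine name belonging to an existing multi-node profile (exit 14, MK_USAGE), and node add refuses a node name already claimed by another profile (exit 80, GUEST_NODE_ADD). Both are reproducible directly; the second fails only while a conflicting multinode-393672-m03 profile exists (sketch):

$ minikube start -p multinode-393672-m02 --driver=docker --container-runtime=crio
$ minikube node add -p multinode-393672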

TestPreload (122.13s)
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-618733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-618733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (59.963567151s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-618733 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-618733 image pull gcr.io/k8s-minikube/busybox: (2.188307193s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-618733
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-618733: (5.98837906s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-618733 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-618733 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (51.249956029s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-618733 image list
helpers_test.go:176: Cleaning up "test-preload-618733" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-618733
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-618733: (2.489610175s)
--- PASS: TestPreload (122.13s)
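
The preload round-trip above, as plain commands: start without the preloaded image tarball, pull an extra image, stop, restart with preload enabled, and confirm the pulled image survived (sketch):

$ minikube start -p test-preload-618733 --memory=3072 --preload=false --driver=docker --container-runtime=crio
$ minikube -p test-preload-618733 image pull gcr.io/k8s-minikube/busybox
$ minikube stop -p test-preload-618733
$ minikube start -p test-preload-618733 --preload=true
$ minikube -p test-preload-618733 image list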

TestScheduledStopUnix (119.48s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-132328 --memory=3072 --driver=docker  --container-runtime=crio
E1210 07:16:52.316504  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:16:58.799598  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-132328 --memory=3072 --driver=docker  --container-runtime=crio: (43.054656654s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-132328 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1210 07:17:33.218880  537707 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:17:33.219045  537707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:17:33.219055  537707 out.go:374] Setting ErrFile to fd 2...
	I1210 07:17:33.219061  537707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:17:33.219303  537707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 07:17:33.219546  537707 out.go:368] Setting JSON to false
	I1210 07:17:33.219662  537707 mustload.go:66] Loading cluster: scheduled-stop-132328
	I1210 07:17:33.220016  537707 config.go:182] Loaded profile config "scheduled-stop-132328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:17:33.220131  537707 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/config.json ...
	I1210 07:17:33.220320  537707 mustload.go:66] Loading cluster: scheduled-stop-132328
	I1210 07:17:33.220443  537707 config.go:182] Loaded profile config "scheduled-stop-132328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-132328 -n scheduled-stop-132328
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-132328 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1210 07:17:33.683563  537797 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:17:33.683850  537797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:17:33.683881  537797 out.go:374] Setting ErrFile to fd 2...
	I1210 07:17:33.683923  537797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:17:33.684271  537797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 07:17:33.684641  537797 out.go:368] Setting JSON to false
	I1210 07:17:33.684893  537797 daemonize_unix.go:73] killing process 537723 as it is an old scheduled stop
	I1210 07:17:33.689340  537797 mustload.go:66] Loading cluster: scheduled-stop-132328
	I1210 07:17:33.689796  537797 config.go:182] Loaded profile config "scheduled-stop-132328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:17:33.689879  537797 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/config.json ...
	I1210 07:17:33.690059  537797 mustload.go:66] Loading cluster: scheduled-stop-132328
	I1210 07:17:33.690172  537797 config.go:182] Loaded profile config "scheduled-stop-132328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1210 07:17:33.695783  364265 retry.go:31] will retry after 116.566µs: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.696556  364265 retry.go:31] will retry after 99.341µs: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.697699  364265 retry.go:31] will retry after 288.263µs: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.698837  364265 retry.go:31] will retry after 479.002µs: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.699973  364265 retry.go:31] will retry after 253.564µs: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.701100  364265 retry.go:31] will retry after 879.973µs: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.702218  364265 retry.go:31] will retry after 1.648672ms: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.704425  364265 retry.go:31] will retry after 1.711831ms: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.706639  364265 retry.go:31] will retry after 2.770753ms: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.709861  364265 retry.go:31] will retry after 4.82663ms: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.715392  364265 retry.go:31] will retry after 3.442888ms: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.719616  364265 retry.go:31] will retry after 12.701895ms: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.735612  364265 retry.go:31] will retry after 8.026886ms: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.744195  364265 retry.go:31] will retry after 14.99479ms: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.759472  364265 retry.go:31] will retry after 20.738619ms: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
I1210 07:17:33.780710  364265 retry.go:31] will retry after 26.505639ms: open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-132328 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-132328 -n scheduled-stop-132328
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-132328
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-132328 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1210 07:17:59.639181  538294 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:17:59.639344  538294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:17:59.639356  538294 out.go:374] Setting ErrFile to fd 2...
	I1210 07:17:59.639361  538294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:17:59.639597  538294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-362392/.minikube/bin
	I1210 07:17:59.639907  538294 out.go:368] Setting JSON to false
	I1210 07:17:59.640011  538294 mustload.go:66] Loading cluster: scheduled-stop-132328
	I1210 07:17:59.640381  538294 config.go:182] Loaded profile config "scheduled-stop-132328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1210 07:17:59.640456  538294 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/scheduled-stop-132328/config.json ...
	I1210 07:17:59.640647  538294 mustload.go:66] Loading cluster: scheduled-stop-132328
	I1210 07:17:59.640765  538294 config.go:182] Loaded profile config "scheduled-stop-132328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1210 07:18:38.175414  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-132328
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-132328: exit status 7 (71.595007ms)
-- stdout --
	scheduled-stop-132328
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-132328 -n scheduled-stop-132328
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-132328 -n scheduled-stop-132328: exit status 7 (69.63404ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-132328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-132328
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-132328: (4.810154365s)
--- PASS: TestScheduledStopUnix (119.48s)
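
The scheduled-stop lifecycle used above: schedule a stop, replace or cancel it, and poll the remaining time via the status template (sketch):

$ minikube stop -p scheduled-stop-132328 --schedule 5m
$ minikube stop -p scheduled-stop-132328 --cancel-scheduled
$ minikube status --format={{.TimeToStop}} -p scheduled-stop-132328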

TestInsufficientStorage (9.28s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-825065 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-825065 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.94361774s)
-- stdout --
	{"specversion":"1.0","id":"9e0f5888-b478-4cb3-89f9-6cb84d4c3fae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-825065] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd483353-8c40-42ee-885d-95705d05da22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22094"}}
	{"specversion":"1.0","id":"43f8ef43-ea5a-40f8-9de7-e25b5edbe3c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7a5976b7-fff5-4ad4-a4e0-4753c46437fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig"}}
	{"specversion":"1.0","id":"dc87260d-35ec-43d0-864f-cba8e206b492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube"}}
	{"specversion":"1.0","id":"d4e3bda8-471f-4fd1-b510-caf601971e47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"739e4874-41f4-4adf-bb03-7ba4009b8953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e88596c1-5d5e-4de4-b835-112c20124839","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"59862e2a-28c4-4b32-b061-8f34c65ca45f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"eb743c65-229f-436e-84bd-12ebd0745023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6a58904-aa7b-486b-8e79-da0c3d3d4630","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ab71135b-5e9d-4ea0-ab3b-57f8ef83afb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-825065\" primary control-plane node in \"insufficient-storage-825065\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d170c73f-76f9-4ddf-a7c8-ca1444d9ee6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bef2aa1e-dc46-4147-a3fa-7d92d767e3bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a7ba7da-e839-4b68-830b-305690477e39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-825065 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-825065 --output=json --layout=cluster: exit status 7 (295.003271ms)
-- stdout --
	{"Name":"insufficient-storage-825065","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-825065","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1210 07:18:56.821170  540111 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-825065" does not appear in /home/jenkins/minikube-integration/22094-362392/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-825065 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-825065 --output=json --layout=cluster: exit status 7 (320.653702ms)
-- stdout --
	{"Name":"insufficient-storage-825065","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-825065","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1210 07:18:57.141844  540178 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-825065" does not appear in /home/jenkins/minikube-integration/22094-362392/kubeconfig
	E1210 07:18:57.152036  540178 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/insufficient-storage-825065/events.json: no such file or directory
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-825065" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-825065
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-825065: (1.714420931s)
--- PASS: TestInsufficientStorage (9.28s)
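
Judging by the JSON stream above, the simulated shortage is driven by the two test-only environment variables MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE, which make start treat /var as full and exit 26 (RSRC_DOCKER_STORAGE); with --output=json, each step and the final error arrive as CloudEvents (sketch):

$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 minikube start -p insufficient-storage-825065 --output=json --driver=docker --container-runtime=crio
$ minikube status -p insufficient-storage-825065 --output=json --layout=cluster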

TestRunningBinaryUpgrade (301.26s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.202738936 start -p running-upgrade-044448 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1210 07:26:52.315935  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.202738936 start -p running-upgrade-044448 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.964027116s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-044448 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1210 07:26:58.798692  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:38.175481  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:01.259464  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-044448 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.104751095s)
helpers_test.go:176: Cleaning up "running-upgrade-044448" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-044448
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-044448: (1.988274083s)
--- PASS: TestRunningBinaryUpgrade (301.26s)
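The sequence above is the in-place upgrade path: a cluster is created with an older released binary (note the legacy `--vm-driver` flag) and then, while it is still running, `start` is re-run on the same profile with the binary under test. A sketch of reproducing it by hand; binary paths and the profile name are placeholders:

    # old released binary creates the cluster (legacy --vm-driver spelling)
    /path/to/minikube-v1.35.0 start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=crio
    # binary under test upgrades the same, still-running profile
    /path/to/minikube-new start -p upgrade-demo --memory=3072 --driver=docker --container-runtime=crio
    /path/to/minikube-new delete -p upgrade-demo    # clean up, as the test does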

                                                
                                    
x
+
TestMissingContainerUpgrade (121.56s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2525141064 start -p missing-upgrade-507679 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2525141064 start -p missing-upgrade-507679 --memory=3072 --driver=docker  --container-runtime=crio: (1m8.46175263s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-507679
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-507679
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-507679 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-507679 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.778451626s)
helpers_test.go:176: Cleaning up "missing-upgrade-507679" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-507679
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-507679: (2.58146749s)
--- PASS: TestMissingContainerUpgrade (121.56s)
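This variant removes the node container out from under minikube before the upgrade: the profile's Docker container (named after the profile) is stopped and deleted, and the new binary must recreate it on `start`. A hand-run sketch with placeholder names:

    /path/to/minikube-old start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio
    docker stop missing-demo && docker rm missing-demo    # simulate the lost node container
    /path/to/minikube-new start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio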

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-673350 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-673350 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (95.828629ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-673350] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-362392/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-362392/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
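The exit status 14 (MK_USAGE) above is the expected result: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, and the hint in stderr covers the case where a version is pinned in the global config rather than passed on the command line. A sketch of the failing and corrected invocations (the profile name is a placeholder):

    minikube start -p demo --no-kubernetes --kubernetes-version=v1.28.0    # rejected with exit status 14
    minikube config unset kubernetes-version    # drop any globally pinned version, per the hint
    minikube start -p demo --no-kubernetes      # valid: node without Kubernetes components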

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (61.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-673350 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-673350 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m0.499346112s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-673350 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (61.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (19.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-673350 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-673350 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (17.047443644s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-673350 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-673350 status -o json: exit status 2 (310.185973ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-673350","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-673350
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-673350: (2.010997564s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.37s)
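Note that `status -o json` signals the degraded state through its exit code as well as the JSON body: the host is "Running" but kubelet and apiserver are "Stopped", so the command exits 2 rather than 0. A sketch of scripting against this, assuming `jq` is installed:

    # capture the exit code before parsing; non-zero means some component is down
    out/minikube-linux-arm64 -p NoKubernetes-673350 status -o json > status.json; rc=$?
    jq -r '.Host + "/" + .Kubelet + "/" + .APIServer' status.json    # Running/Stopped/Stopped
    echo "status exit code: $rc"    # 2 in the run above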

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-673350 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-673350 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.227993701s)
--- PASS: TestNoKubernetes/serial/Start (8.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22094-362392/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-673350 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-673350 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.148539ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
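The `exit status 1` here is `minikube ssh` reporting that the remote command failed; the stderr line carries the remote status itself, 3, which is what `systemctl is-active` returns for an inactive unit (it exits 0 only when the unit is active). A manual check along the same lines:

    # succeeds (exit 0) only if kubelet is active; the || branch fires otherwise
    minikube ssh -p NoKubernetes-673350 "sudo systemctl is-active kubelet" || echo "kubelet not active"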

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-673350
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-673350: (1.43683749s)
--- PASS: TestNoKubernetes/serial/Stop (1.44s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-673350 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-673350 --driver=docker  --container-runtime=crio: (8.766401342s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.77s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-673350 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-673350 "sudo systemctl is-active --quiet service kubelet": exit status 1 (349.506696ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.75s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.75s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (313.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.4015134883 start -p stopped-upgrade-051989 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.4015134883 start -p stopped-upgrade-051989 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.641144016s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.4015134883 -p stopped-upgrade-051989 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.4015134883 -p stopped-upgrade-051989 stop: (1.443855595s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-051989 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1210 07:21:52.316319  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:58.798905  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:23:21.916961  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:23:38.175463  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-013831/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:24:55.388896  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-051989 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.966657883s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (313.05s)
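This is the stopped-cluster counterpart to TestRunningBinaryUpgrade above: the old binary creates the cluster and then stops it, and the new binary has to bring the stopped profile back up. Sketch, with placeholder paths:

    /path/to/minikube-v1.35.0 start -p stopped-demo --memory=3072 --vm-driver=docker --container-runtime=crio
    /path/to/minikube-v1.35.0 -p stopped-demo stop    # leave the cluster stopped before upgrading
    /path/to/minikube-new start -p stopped-demo --memory=3072 --driver=docker --container-runtime=crio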

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-051989
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-051989: (1.994833305s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.99s)

                                                
                                    
x
+
TestPause/serial/Start (63.42s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-541318 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1210 07:31:52.315325  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/functional-253997/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:31:58.799001  364265 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-362392/.minikube/profiles/addons-241520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-541318 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m3.419219881s)
--- PASS: TestPause/serial/Start (63.42s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (27.51s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-541318 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-541318 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.4899013s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.51s)

                                                
                                    

Test skip (33/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
13 TestDownloadOnly/v1.34.3/preload-exists 0.15
16 TestDownloadOnly/v1.34.3/kubectl 0
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0.06
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/preload-exists (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1210 06:09:24.240537  364265 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
W1210 06:09:24.338550  364265 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 status code: 404
W1210 06:09:24.391350  364265 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.34.3/preload-exists (0.15s)
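The SKIP follows directly from the two 404s: minikube probes the GCS bucket first, then falls back to the GitHub release, and no v1.34.3 cri-o/arm64 preload exists at either location (minikube then pulls images individually at start time). The same probe can be run by hand:

    # HEAD request; prints the HTTP status code (404 here, matching the log)
    curl -sIL -o /dev/null -w '%{http_code}\n' \
      https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-arm64.tar.lz4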

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1210 06:09:28.760287  364265 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
W1210 06:09:28.805658  364265 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
W1210 06:09:28.819861  364265 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-arm64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    